ChatGPT in Clinical Medicine, Urology and Academia: A Review


Tzelves L., Kapriniotis K., Feretzakis G., Katsimperis S., Manolitsis I., Juliebø-Jones P., et al.

Archivos Espanoles de Urologia, vol.77, no.7, pp.708-717, 2024 (SCI-Expanded)

  • Publication Type: Article / Review
  • Volume: 77 Issue: 7
  • Publication Date: 2024
  • DOI: 10.56434/j.arch.esp.urol.20247707.99
  • Journal Name: Archivos Espanoles de Urologia
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, BIOSIS, EMBASE, Gender Studies Database, DIALNET
  • Page Numbers: pp.708-717
  • Keywords: artificial intelligence, ChatGPT, machine learning, neural network, urology
  • İstanbul University Affiliated: Yes

Abstract

Background: This study aims to provide a comprehensive overview of the current literature on the use of ChatGPT in clinical medicine, urology, and academic medicine, while also addressing the associated ethical challenges and potential risks.

Methods: This narrative review involved an extensive search of the PubMed and MEDLINE databases covering the period from January 2022 to January 2024. The search phrase "urologic surgery" was used in conjunction with "artificial intelligence", "machine learning", "neural network", "ChatGPT", "urology", and "medicine". Studies were selected from the screened records to examine the possible interactions between these fields. Research utilising animal models was excluded.

Results: ChatGPT has demonstrated its usefulness in clinical settings by producing accurate clinical correspondence, discharge summaries, and medical records, thereby assisting with these laborious tasks, especially in its latest iterations. Furthermore, patients can access essential medical information by querying ChatGPT. Nevertheless, there are multiple concerns regarding the accuracy of the system, including reports of fabricated data and references. These issues underscore the importance of physician oversight of the final output to guarantee patient safety. ChatGPT also shows potential in academic medicine for generating drafts and organising datasets. However, guidelines and plagiarism-detection tools are necessary to mitigate the risks of plagiarism and fabricated data when it is used for academic purposes.

Conclusions: ChatGPT should be utilised as a supplementary tool by urologists and academics. At present, however, human oversight is advisable to guarantee patient safety, uphold academic integrity, and maintain transparency.