This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessing ChatGPT’s theoretical knowledge and prescriptive accuracy in bacterial infections: a comparative study with infectious diseases residents and specialists
Citations: 29
Authors: 21
Year: 2024
Abstract
OBJECTIVES: Advancements in artificial intelligence (AI) have made platforms like ChatGPT increasingly relevant in medicine. This study assesses ChatGPT's utility in addressing bacterial infection-related questions and antibiogram-based clinical cases.

METHODS: This study was a collaborative effort between infectious disease (ID) specialists and residents. A group of experts formulated six true/false questions, six open-ended questions, and six clinical cases with antibiograms for four types of infections (endocarditis, pneumonia, intra-abdominal infections, and bloodstream infections), for a total of 96 questions. The questions were submitted to four senior ID residents and four ID specialists and inputted into ChatGPT-4 and a trained version of ChatGPT-4. A total of 720 responses were obtained and reviewed by a blinded panel of experts in antibiotic treatments, who evaluated the responses for accuracy and completeness, the ability to identify correct resistance mechanisms from antibiograms, and the appropriateness of antibiotic prescriptions.

RESULTS: No significant difference was noted among the four groups for true/false questions, with approximately 70% correct answers. The trained ChatGPT-4 and ChatGPT-4 offered more accurate and complete answers to the open-ended questions than both the residents and the specialists. Regarding the clinical cases, ChatGPT-4 showed lower accuracy in recognizing the correct resistance mechanism. ChatGPT-4 tended not to prescribe newer antibiotics such as cefiderocol or imipenem/cilastatin/relebactam, favoring less recommended options such as colistin. Both the trained ChatGPT-4 and ChatGPT-4 recommended longer-than-necessary treatment periods (p-value = 0.022).

CONCLUSIONS: This study highlights ChatGPT's capabilities and limitations in medical decision-making, specifically regarding bacterial infections and antibiogram analysis. While ChatGPT demonstrated proficiency in answering theoretical questions, it did not consistently align with expert decisions in clinical case management. Despite these limitations, ChatGPT's potential as a supportive tool in ID education and preliminary analysis is evident. However, it should not replace expert consultation, especially in complex clinical decision-making.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
- Andrea De Vito
- Nicholas Geremia
- Andrea Marino
- Davide Fiore Bavaro
- Giorgia Caruana
- Marianna Meschiari
- Agnese Colpani
- Maria Mazzitelli
- Vincenzo Scaglione
- Emmanuele Venanzi Rullo
- Vito Fiore
- Marco Fois
- Edoardo Campanella
- Eugenia Pistarà
- Matteo Faltoni
- Giuseppe Nunnari
- Anna Maria Cattelan
- Cristina Mussini
- Michele Bartoletti
- Luigi Angelo Vaira
- Giordano Madeddu
Institutions
- AOL (United States) (US)
- Ospedale San Paolo (IT)
- Ospedale dell'Angelo (IT)
- Ospedale Garibaldi (IT)
- University of Catania (IT)
- IRCCS Humanitas Research Hospital (IT)
- Humanitas University (IT)
- University of Lausanne (CH)
- Hôpital de Sion (CH)
- University of Modena and Reggio Emilia (IT)
- University of Sassari (IT)
- University of Padua (IT)
- University of Messina (IT)