OpenAlex · Updated hourly · Last update: 17.05.2026, 20:59

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Benchmarking the symptom-checking capabilities of ChatGPT for a broad range of diseases

2023 · 28 citations · Journal of the American Medical Informatics Association · Open Access
Open full text at the publisher

28 citations · 3 authors · published 2023

Abstract

OBJECTIVE: This study evaluates ChatGPT's symptom-checking accuracy across a broad range of diseases using the Mayo Clinic Symptom Checker patient service as a benchmark. METHODS: We prompted ChatGPT with symptoms of 194 distinct diseases. By comparing its predictions with expectations, we calculated a relative comparative score (RCS) to gauge accuracy. RESULTS: ChatGPT's GPT-4 model achieved an average RCS of 78.8%, outperforming the GPT-3.5-turbo by 10.5%. Some specialties scored above 90%. DISCUSSION: The test set, although extensive, was not exhaustive. Future studies should include a more comprehensive disease spectrum. CONCLUSION: ChatGPT exhibits high accuracy in symptom checking for a broad range of diseases, showcasing its potential as a medical training tool in learning health systems to enhance care quality and address health disparities.
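The abstract does not define how the relative comparative score (RCS) is computed from the comparison of predictions with expectations. As a purely hypothetical sketch, assuming a per-case RCS equal to the fraction of benchmark diagnoses recovered by the model, averaged over all cases (the function name, data, and scoring rule here are illustrative assumptions, not the paper's actual method):

```python
# Hypothetical sketch only: the paper's actual RCS formula is not given on this page.
# Assume each test case pairs the model's predicted diagnoses with the expected
# diagnoses from a benchmark such as the Mayo Clinic Symptom Checker.

def relative_comparative_score(predicted, expected):
    """Fraction of expected diagnoses that appear among the model's predictions."""
    if not expected:
        return 0.0
    hits = sum(1 for dx in expected if dx in predicted)
    return hits / len(expected)

# Toy data, for illustration only (not from the study).
cases = [
    (["migraine", "tension headache"], ["migraine"]),          # 1/1 recovered
    (["influenza", "common cold"], ["influenza", "covid-19"]), # 1/2 recovered
]

avg_rcs = sum(relative_comparative_score(p, e) for p, e in cases) / len(cases)
print(f"Average RCS: {avg_rcs:.1%}")  # → Average RCS: 75.0%
```

The study reports such an average over 194 diseases (78.8% for GPT-4); the toy numbers above only illustrate the averaging step.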

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Explainable Artificial Intelligence (XAI)