This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating the AI Potential as a Safety Net for Diagnosis: A Novel Benchmark of Large Language Models in Correcting Diagnostic Errors
Citations: 0
Authors: 12
Year: 2026
Abstract
Background
Diagnostic errors are a leading cause of preventable patient harm, often occurring during early clinical encounters, where diagnostic uncertainty is maximal. Large language models (LLMs) have shown potential in medical reasoning, yet their ability to function as a diagnostic safety net, specifically by identifying and correcting human diagnostic errors, remains systematically unquantified. We evaluated whether state-of-the-art LLMs can effectively challenge, rather than merely confirm, an erroneous physician diagnosis.

Methods
We evaluated 16 leading LLMs (including GPT-o1, Gemini 2.5 Pro, and Claude 3.7 Sonnet) using 200 standardized clinical vignettes representing 20 high-stakes, frequently misdiagnosed conditions. Models were presented with the full clinical record and an incorrect physician diagnosis. Primary outcomes were the diagnostic correction rate (disagreeing with the error and providing the correct diagnosis) and the ratio of correction to error detection. We further tested model robustness by generating 2,200 variants to assess the influence of demographic (race/ethnicity) and contextual (institutional reputation, training level, insurance) tokens.

Results
Diagnostic correction rates varied significantly across models. Gemini 2.5 Pro demonstrated the highest performance, correcting the physician's error in 55.0% of cases (n=110/200), followed by Claude Sonnet 3.5 (48.5%) and Sonnet 4 (47.0%). In contrast, DeepSeek V3 corrected only 20.0% of cases. Performance was strikingly consistent at the disease level; most models failed to correct errors in syphilis, spinal epidural abscess, and myocardial infarction. Furthermore, several models exhibited confirmation bias, agreeing with the incorrect diagnosis in 11.0% to 50.0% of cases. Stability across demographic and contextual variants was inconsistent, with some models showing spurious performance shifts driven by non-clinical tokens.
Conclusion
While top-performing LLMs can intercept approximately half of human diagnostic errors in high-stakes scenarios, performance is heterogeneous and highly sensitive to non-clinical context. Current models exhibit significant disease-specific gaps and a tendency toward confirmation bias, suggesting that their safe clinical integration requires adversarial, multi-agent workflows designed to prioritize skepticism over baseline agreement.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations