This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Assessing ChatGPT-4 as a clinical decision support tool in neuro-oncology radiotherapy: a prospective comparative study
Citations: 1
Authors: 7
Year: 2025
Abstract
Large language models (LLMs) such as ChatGPT-4 have shown potential for medical decision support, but their reliability in specialized fields remains uncertain. This study aimed to evaluate ChatGPT-4’s performance as a clinical decision support tool in neuro-oncology radiotherapy by comparing its treatment recommendations for patients with central nervous system tumors against a multidisciplinary tumor board’s decisions, an independent specialist’s opinion, and published guidelines. We prospectively collected 101 neuro-oncology cases (May 2024–May 2025) presented at a tertiary-care tumor board. Key case details were entered into ChatGPT-4 with a standardized query asking whether to recommend radiotherapy and, if so, the target volumes and dose. The AI’s recommendations were recorded and compared to the tumor board’s consensus, a blinded radiation oncologist’s recommendation, and ESMO guideline indications when applicable. Concordance rates (percentage agreement) and Cohen’s kappa were calculated. Sensitivity and specificity were assessed using the reference decisions as ground truth. McNemar’s test was used to evaluate any bias in discordant recommendations. ChatGPT-4 matched the tumor board’s radiotherapy recommendations in 76% of cases (κ = 0.61). Agreement with the independent specialist was 79% (κ = 0.58). In 61 low-complexity cases with clear guidelines, ChatGPT-4 concurred with guideline-based indications in 76.7% of cases, missing some recommended treatments (sensitivity 73%, specificity 100%). In intermediate-complexity scenarios, concordance with the tumor board was 70.8%, with most discrepancies due to the AI recommending treatment that experts did not (sensitivity 85.7%, specificity 64.7%). In high-complexity cases, agreement was 90.9% (sensitivity 100%, specificity 83.3%). Overall, ChatGPT-4 showed an overtreatment bias, more often recommending radiotherapy when the human experts chose observation (p < 0.05 for AI vs. tumor board discordances). 
Its overall agreement (76%) was lower than that of the human specialist (90%). ChatGPT-4 can reproduce many expert radiotherapy decisions in neuro-oncology, reflecting substantial absorption of standard clinical practice. However, it cannot substitute for human judgment: the AI omitted some indicated treatments in straightforward cases and suggested unnecessary therapy in some borderline cases, indicating a lack of nuanced clinical reasoning. Careful human oversight is essential if such models are to be used for clinical decision support.
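The abstract reports concordance both as raw percentage agreement and as Cohen's kappa, which corrects agreement for chance. The following minimal sketch (with hypothetical data, not the study's cases) shows how the two measures relate for paired binary recommendations (1 = recommend radiotherapy, 0 = observe):

```python
from collections import Counter

def agreement_and_kappa(a, b):
    """Return (percent agreement, Cohen's kappa) for two paired label lists."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of cases where both raters concur
    po = sum(x == y for x, y in zip(a, b)) / n
    # expected chance agreement from each rater's marginal frequencies
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    kappa = (po - pe) / (1 - pe) if pe != 1 else 1.0
    return po, kappa

# Hypothetical example: AI vs. tumor board on ten cases
ai    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
board = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
po, kappa = agreement_and_kappa(ai, board)
# po = 0.8, kappa = 0.6: high raw agreement yields a more modest
# chance-corrected kappa, as in the reported 76% / kappa = 0.61
```

This illustrates why a 76% agreement rate corresponds to a kappa of only about 0.6: part of the raw agreement is expected by chance alone.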
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations