This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT for Therapy Conception of Colorectal Cancer: Can Artificial Intelligence Complement a Traditional Tumor Board?
0 Citations · 6 Authors · Year: 2025
Abstract
Background: Although multidisciplinary tumor boards (MDTs) represent the gold standard for decision-making in cancer treatment, they require significant resources and may be susceptible to human bias. Artificial intelligence (AI), particularly large language models such as ChatGPT, has the potential to enhance or optimize decision-making processes. The present study examines the potential for integrating AI into clinical practice by comparing MDT decisions with those generated by ChatGPT.

Aims: The aim of this study is to evaluate the concordance between the therapeutic recommendations proposed by an MDT and those generated by a large language model (ChatGPT) for colorectal cancer.

Methods: A retrospective, monocentric comparative study was conducted involving consecutive patients with newly diagnosed colorectal cancer discussed at our MDT. The pre-therapeutic and post-therapeutic MDT recommendations were compared with those of ChatGPT-4 with respect to concordance.

Results: In the pre-therapeutic discussions, complete concordance was observed in 72.5% of cases, with partial concordance in 10.2% and discordance in 17.3%. For post-therapeutic discussions, concordance increased to 82.8%, while 11.8% of decisions displayed partial concordance and 5.4% demonstrated discordance. Notably, discordance was more frequent in patients > 77 years of age and in those with ASA ≥ III.

Conclusion: There is substantial concordance between the recommendations generated by ChatGPT and those provided by a traditional MDT, indicating the potential utility of AI in supporting clinical decision-making for colorectal cancer management.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations