This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI-assisted treatment decisions for femoral neck fractures: a simulated assessment of ChatGPT-4’s accuracy and comprehensiveness
Citations: 0
Authors: 11
Year: 2025
Abstract
As technology advances, medical knowledge has become more democratized and accessible. An increasing number of patients now rely on search engines and AI chatbots for medical information [13,14], a trend that reduces dependence on traditional healthcare systems and sometimes even challenges physicians' recommendations. ChatGPT-4 is not specifically designed for healthcare and lacks domain-specific training [15,16]. Its tendency to generate seemingly reasonable but not evidence-based treatment suggestions is particularly dangerous in specialized decision-making [17]. Existing studies have raised concerns about its accuracy in triage and diagnosis [9,18]. Although AI holds substantial potential in healthcare, its integration into clinical decision support systems, especially for decisions regarding femoral neck fracture (FNF) treatment, remains insufficiently studied. ChatGPT-4 can process vast amounts of data, clinical parameters, and demographic information to provide personalized treatment recommendations [19,20]. Research on the application of AI in FNF treatment decisions is currently limited, and the accuracy of ChatGPT-4's recommendations in this field has not been evaluated; this study aims to address that gap. Previous research focused primarily on single-option and closed-ended questions, failing to address the needs of individualized treatment decisions [21,22]. In this study, orthopedic surgeons from multiple medical centers were recruited to conduct a preliminary assessment of ChatGPT-4's accuracy and comprehensiveness in responding to medical queries from simulated patients with FNF. Additionally, our study provides subjective, open-ended explanations to inform treatment decisions for FNF. Professional physicians rated the treatment recommendations and explanations generated by ChatGPT-4, allowing an analysis of its limitations in producing medical information. This evaluation provides preliminary evidence for the reliability of ChatGPT-4 in clinical fracture treatment.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations