This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessing The Ability of ChatGPT 5.2 to Answer Patient Questions Regarding Hip Avascular Necrosis
Citations: 0 · Authors: 4 · Year: 2026
Abstract
Objective: This study aimed to evaluate the quality and readability of Chat Generative Pretrained Transformer (ChatGPT) 5.2’s responses to frequently asked patient questions about hip avascular necrosis (AVN), a challenging condition often requiring clear communication and shared decision-making.

Methods: Sixteen commonly asked patient questions regarding hip AVN were submitted to ChatGPT 5.2 without follow-up queries. Each response was independently evaluated by 2 orthopedic surgeons with over 20 years of experience in hip arthroplasty. The quality of responses was assessed using the grading system proposed by Mika et al. Readability was analyzed using the Flesch–Kincaid Reading Ease Score (FRES) and Flesch–Kincaid Reading Level (FKRL). Interrater reliability (IRR) was calculated using Cohen’s kappa test.

Results: Reviewer 1 rated 11/16 responses as “excellent—no clarification required” and 5/16 as “satisfactory—minimal clarification needed.” Reviewer 2 rated 10/16 responses as excellent and 6/16 as satisfactory. The mean FRES score was 27.1 (range: 12.4-45.6), indicating the content was “difficult to read.” The FKRL scores corresponded to college or college graduate reading levels. The IRR between reviewers was moderate (κ = 0.59, 95% CI: 0.09-1.00).

Conclusion: ChatGPT 5.2 provided overall satisfactory to excellent responses regarding hip AVN. However, the high reading level required to understand these answers may limit their effectiveness in patient education unless simplified language is employed.

Cite this article as: Şahin E, Baltacı Ç, Kalem M, Kocaoğlu H. Assessing the ability of ChatGPT 5.2 to answer patient questions regarding hip avascular necrosis. Acta Orthop Traumatol Turc., 2026; 60(2), 0629, doi:10.5152/j.aott.2026.25629.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations