This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Readability assessment of ChatGPT 5.0 responses are more complex for anterior cruciate ligament reconstruction compared to American Academy of Orthopaedic Surgeons’ OrthoInfo
0
Citations
8
Authors
2026
Year
Abstract
INTRODUCTION: ChatGPT shows promise as a search tool and a source of patient information. This study aims to evaluate the readability of information about anterior cruciate ligament reconstruction (ACLR) available through ChatGPT 5.0 and compare it with the readability of information provided by the American Academy of Orthopaedic Surgeons (AAOS).

METHODS: ACLR was chosen due to its extensive coverage on the AAOS OrthoInfo website. The same subsection formats found on the AAOS site were used to query ChatGPT 5.0. The information gathered from both AAOS and ChatGPT 5.0 was analyzed for readability using several established tests: Coleman-Liau, Flesch-Kincaid, Flesch Reading Ease Index, FORCAST Readability Formula, Fry Graph, Gunning Fog Index, Raygor Readability Estimate, and the Simple Measure of Gobbledygook (SMOG) Readability Formula.

DISCUSSION: The analysis showed that the average reading grade level for ACL reconstruction information on the AAOS OrthoInfo website was 10.2 ± 1.2, suitable for a high school sophomore. The average reading ease score was 56.9 ± 14.2, categorized as "fairly difficult." In contrast, the average reading grade level for ChatGPT's ACL reconstruction information was 12.9 ± 1.6, indicating a college-level reading requirement, with a reading ease score of 38.1 ± 4.1, falling in the "difficult" category. There was a statistically significant difference (p < 0.01, Cohen's d = 1.91) in both reading grade level and reading ease between the AAOS and ChatGPT sources.

CONCLUSION: This study demonstrates that the reading level of ChatGPT 5.0-generated information regarding ACLR is higher than that found on the AAOS OrthoInfo website, requiring more education for comprehension. Clarity and completeness are both critical elements of a tool used by patients for educational purposes; while the information may be readily available, it currently demonstrates poor readability for patients, which may contribute to decisional conflict and excessive patient concern.

LEVEL OF EVIDENCE: IV.
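Two of the metrics named in the methods, the Flesch Reading Ease Index and the Flesch-Kincaid grade level, have published closed-form formulas based on words per sentence and syllables per word. The sketch below is a minimal illustration of those two formulas only; it uses a naive vowel-group syllable heuristic rather than the dictionary-based syllable counting that professional readability tools use, so its scores will only approximate the values reported in the study.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per contiguous vowel group.
    # Real readability tools use dictionary-based syllabification.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_scores(text: str) -> tuple[float, float]:
    """Return (reading ease, Flesch-Kincaid grade level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word

    # Flesch Reading Ease: higher is easier (90-100 "very easy",
    # 50-60 "fairly difficult", below 50 "difficult").
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: approximate U.S. school grade.
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return round(ease, 1), round(grade, 1)


simple = "The knee has a strong band inside it. It can tear when you twist."
complex_text = ("Comprehensive postoperative rehabilitation protocols "
                "necessitate multidisciplinary coordination.")
print(flesch_scores(simple))
print(flesch_scores(complex_text))
```

Longer sentences and polysyllabic vocabulary drive the ease score down and the grade level up, which is the mechanism behind the gap the study reports between AAOS OrthoInfo and ChatGPT 5.0 text.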
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations