This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The ABCs of PEMs: Using Artificial Intelligence to Enhance the Readability of Patient Educational Materials in Pediatric Orthopaedics
Citations: 0
Authors: 6
Year: 2025
Abstract
Background
While the AMA and NIH recommend that patient educational materials (PEMs) be written at a 6th-grade reading level, studies consistently show that PEMs in orthopaedics are written at the 10th-grade level or higher. This mismatch disproportionately affects patients with limited health literacy, who are at increased risk for poor clinical outcomes. This study investigates the potential of artificial intelligence (AI) platforms, including ChatGPT and OpenEvidence, to generate PEMs in pediatric orthopaedics that meet readability standards without sacrificing clinical accuracy.

Methods
Fifty-one of the most common pediatric orthopaedic conditions were selected from the AAOS OrthoInfo PEM database. For each condition, PEMs were generated using two AI platforms, ChatGPT-4 and OpenEvidence, with a standardized prompt requesting a 6th-grade-level explanation covering relevant anatomy, symptoms, physical exam findings, and treatment options. Readability was assessed using eight validated readability metrics via the Python Textstat library. PEMs were scored for accuracy and completeness by four blinded pediatric orthopaedic surgeons. Interrater reliability was assessed using intraclass correlation coefficients (ICC), and statistical comparisons were performed using paired t-tests.

Results
ChatGPT-generated PEMs had the lowest average reading grade level (8.7) compared with OrthoInfo (10.8) and OpenEvidence (10.1). OrthoInfo PEMs were rated highest for accuracy and completeness (total accuracy: 6.95; total completeness: 6.98), compared with ChatGPT (total accuracy: 6.15; total completeness: 5.90) and OpenEvidence (total accuracy: 3.25; total completeness: 3.05), but ChatGPT approached OrthoInfo in several subdomains, including treatment descriptions, timeline, and follow-up recommendations.

Conclusions
This study demonstrates the promise of AI platforms in generating readable, patient-friendly educational materials in pediatric orthopaedics. While OrthoInfo remains the gold standard in content accuracy and completeness, it falls short of national readability guidelines. AI tools like ChatGPT and OpenEvidence produced significantly more readable PEMs and, in some categories, approached the quality of expert-validated materials. These findings suggest a potential role for AI-assisted content creation in bridging the health literacy gap. However, concerns surrounding accuracy, hallucinations, and source transparency must be addressed before AI-generated PEMs can be safely integrated into clinical practice.

Level of evidence: IV
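The grade-level scores reported in the abstract come from standard readability formulas such as those implemented in the Textstat library named in the Methods. As an illustration only (this is not the study's code), the Flesch-Kincaid grade level, one of the common metrics of this kind, can be computed directly from its published formula given raw word, sentence, and syllable counts:

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw text counts.

    Published formula:
        0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Higher values indicate text that requires more years of schooling.
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Hypothetical counts: a passage of 100 words in 5 sentences with
# 130 syllables scores at roughly a mid-7th-grade reading level.
grade = flesch_kincaid_grade(words=100, sentences=5, syllables=130)
print(round(grade, 2))  # → 7.55
```

In practice a library like Textstat also handles the tokenization and syllable counting, which is where most implementations differ; the formula itself is fixed.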
Related Works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,125 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,078 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,827 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,209 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,932 citations