OpenAlex · Updated hourly · Last updated: 02.04.2026, 05:14

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The ABCs of PEMs: Using Artificial Intelligence to Enhance the Readability of Patient Educational Materials in Pediatric Orthopaedics

2025 · 0 citations · UNC Libraries · Open Access

0 Citations · 6 Authors · Published 2025

Abstract

Background: While the AMA and NIH recommend that patient educational materials (PEMs) be written at a 6th-grade reading level, studies consistently show that PEMs in orthopaedics are written at the 10th-grade level or higher. This mismatch disproportionately affects patients with limited health literacy, who are at increased risk for poor clinical outcomes. This study investigates the potential of artificial intelligence (AI) platforms, including ChatGPT and OpenEvidence, to generate PEMs in pediatric orthopaedics that meet readability standards without sacrificing clinical accuracy.

Methods: Fifty-one of the most common pediatric orthopaedic conditions were selected from the AAOS OrthoInfo PEM database. For each condition, PEMs were generated with two AI platforms, ChatGPT-4 and OpenEvidence, using a standardized prompt requesting a 6th-grade-level explanation that covered relevant anatomy, symptoms, physical exam findings, and treatment options. Readability was assessed with eight validated readability metrics via the Python Textstat library. PEMs were scored for accuracy and completeness by four blinded pediatric orthopaedic surgeons. Interrater reliability was assessed with intraclass correlation coefficients (ICC), and statistical comparisons were performed with paired t-tests.

Results: ChatGPT-generated PEMs had the lowest average reading grade level (8.7) compared to OrthoInfo (10.8) and OpenEvidence (10.1). OrthoInfo PEMs were rated highest for accuracy and completeness (total accuracy: 6.95; total completeness: 6.98), compared to ChatGPT (total accuracy: 6.15; total completeness: 5.90) and OpenEvidence (total accuracy: 3.25; total completeness: 3.05), but ChatGPT approached OrthoInfo in several subdomains, including treatment descriptions, timeline, and follow-up recommendations.

Conclusions: This study demonstrates the promise of AI platforms for generating readable, patient-friendly educational materials in pediatric orthopaedics. While OrthoInfo remains the gold standard in content accuracy and completeness, it falls short of national readability guidelines. AI tools such as ChatGPT and OpenEvidence produced significantly more readable PEMs and, in some categories, approached the quality of expert-validated materials. These findings suggest a potential role for AI-assisted content creation in bridging the health literacy gap. However, concerns surrounding accuracy, hallucinations, and source transparency must be addressed before AI-generated PEMs can be safely integrated into clinical practice.

Level of Evidence: IV
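The reading-grade-level comparison described in the Methods rests on formulas like the Flesch-Kincaid Grade Level, one of the eight metrics the Textstat library provides. As a rough illustration of how such a score is computed from sentence, word, and syllable counts, here is a minimal self-contained sketch; the syllable counter is a crude vowel-group heuristic of my own (Textstat's is more robust), and the sample sentence is hypothetical, not taken from the study's PEMs.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, discounting a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(
        0.39 * len(words) / len(sentences)
        + 11.8 * syllables / len(words)
        - 15.59,
        1,
    )

# Hypothetical patient-facing sentence, not from the study's materials.
sample = "The bone in your arm is broken. The doctor will put on a cast."
print(flesch_kincaid_grade(sample))
```

Short words and short sentences push the score toward lower grade levels, which is why the standardized prompt's request for a 6th-grade explanation is measurable at all; in practice one would call `textstat.flesch_kincaid_grade(text)` and the library's other metrics rather than reimplement them.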


Topics

Health Literacy and Information Accessibility
Artificial Intelligence in Healthcare and Education
Social Media in Health Education