This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract 4367611: Large Language Model (LLM) Applications for Assessing Patient Health Communication Proficiency
Citations: 0
Authors: 5
Year: 2025
Abstract
Introduction: Health communication ability is a potent predictor of patient outcomes; assessing this ability is crucial for ensuring that clinical consultations, health education, and literacy initiatives are effective. However, existing assessment methods lack scalability, consistency, and objectivity, creating a need for a consistent, quantifiable metric that healthcare providers and clinical researchers can use to rigorously assess the health communication proficiency of patients.

Research Hypothesis: A Large Language Model (LLM)-driven Communication in Health Assessment Tool (CHAT) can be developed using a standardized rubric-based approach to quantify patient communication proficiency.

Methods/Approach: An LLM rubric-based assessment tool was designed using U.S. CDC metrics for clear communication. Assessment parameters included clarity of language, lexical diversity, conciseness and completeness, engagement with health information, and overall health literacy, reflecting patient communication proficiency and efficacy. The rubric was recursively optimized using novel synthetic transcript generation and prompt engineering approaches. Final validation was conducted on open-source patient-doctor transcript databases from the U.S. Department of Veterans Affairs.

Results/Data: A consistent rubric-based input was developed with scores ranging from 1 to 4 in each of the five communication parameters (20 total points). Variability in score output was assessed by calculating the average standard deviation (SD) across 50 independent evaluations of n=20 fixed synthetic transcripts (Figure 1), yielding distributions with an average SD of 0.14 (SD=0.007). This confirms the ability of the model to consistently interpret, apply, and score the rubric-based input. Next, the model was assessed across each rubric parameter by plotting the average SD across 100 independent gradings of n=31 real patient-doctor transcripts (Figure 2). This resulted in an average SD of 0.39 (SD=0.02) for each category parameter and an overall-score SD of 1.23, demonstrating high category-wise grading consistency on real transcripts.

Conclusions: CHAT provides a rigorous, scalable, and objective mechanism for evaluating patient communication skills. Its standardized scoring allows educators, healthcare providers, and researchers to quantify improvements in health literacy interventions and to assess clinical communication objectively, enhancing personalized healthcare education and practice into the future.
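The consistency metric described in Results/Data (the average per-transcript standard deviation across repeated independent gradings of a fixed set of transcripts) can be sketched as follows. This is a minimal illustration of the statistic, not the authors' code; the function name, data layout, and the use of the population standard deviation are assumptions.

```python
from statistics import mean, pstdev


def score_consistency(scores):
    """Average per-transcript standard deviation across repeated gradings.

    scores: a list of evaluation runs, where each run is a list of
        per-transcript rubric totals (e.g. 50 runs x 20 transcripts,
        each total in the 5-20 range of the abstract's rubric).
    Returns (average_sd, per_transcript_sds).

    Note: uses the population SD (pstdev); the paper does not state
    whether a population or sample SD was used.
    """
    n_transcripts = len(scores[0])
    # SD of each transcript's score across all independent runs.
    per_transcript_sds = [
        pstdev(run[t] for run in scores) for t in range(n_transcripts)
    ]
    return mean(per_transcript_sds), per_transcript_sds


# Toy example: 2 runs over 2 transcripts. Transcript 0 is graded
# identically both times (SD 0); transcript 1 varies between 12 and 14.
avg_sd, sds = score_consistency([[10, 12], [10, 14]])
# avg_sd is 0.5, sds is [0.0, 1.0]
```

A lower `avg_sd` means the model applies the rubric more reproducibly; the abstract reports 0.14 on synthetic transcripts and 0.39 per category on real ones under this kind of repeated-grading protocol.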
Related Works
Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
2004 · 6,119 citations
The content validity index: Are you sure you know what's being reported? critique and recommendations
2006 · 6,070 citations
Health literacy and public health: A systematic review and integration of definitions and models
2012 · 5,819 citations
Low Health Literacy and Health Outcomes: An Updated Systematic Review
2011 · 5,205 citations
Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century
2000 · 4,931 citations