This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Analyzing patient perspectives with large language models: a cross-sectional study of sentiment and thematic classification on exception from informed consent
Citations: 6
Authors: 15
Year: 2025
Abstract
Large language models (LLMs) can improve text analysis efficiency in healthcare. This study explores the application of LLMs to analyze patient perspectives within the exception from informed consent (EFIC) process, which waives consent in emergency research. Our objective is to assess whether LLMs can analyze patient perspectives in EFIC interviews with performance comparable to human reviewers. We analyzed 102 EFIC community interviews from 9 sites, each with 46 questions, as part of the Pediatric Dose Optimization for Seizures in Emergency Medical Services study. We evaluated 5 LLMs, including GPT-4, to assess sentiment polarity on a 5-point scale and to classify responses into predefined thematic classes. Three human reviewers conducted parallel analyses, with agreement measured by Cohen's kappa and classification accuracy. Polarity scores from the LLM and human reviewers showed substantial agreement (Cohen's kappa: 0.69, 95% CI 0.61-0.76), with major discrepancies in only 4.7% of responses. The LLM achieved high thematic classification accuracy (0.868, 95% CI 0.853-0.881), comparable to inter-rater agreement among human reviewers (0.867, 95% CI 0.836-0.901). LLMs enabled large-scale visual analysis, comparing response statistics across sites, questions, and classes. LLMs efficiently analyzed patient perspectives in EFIC interviews, demonstrating substantial performance in sentiment assessment and thematic classification. However, occasional underperformance suggests LLMs should complement, not replace, human judgment. Future work should evaluate LLM integration in EFIC to enhance efficiency, reduce subjectivity, and support accurate patient perspective analysis.
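As a minimal sketch of the agreement metric named in the abstract, unweighted Cohen's kappa between an LLM's polarity ratings and a human reviewer's can be computed from the observed and chance-expected agreement. The function name and the example ratings below are hypothetical, not from the study's data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two raters' label sequences."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence, from each rater's marginal label frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(count_a) | set(count_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    if p_e == 1:  # both raters used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 5-point polarity scores (-2 .. +2) for eight interview responses.
llm_scores   = [2, 1, 0, -1, 1, 2, 0, -2]
human_scores = [2, 1, 1, -1, 1, 2, 0, -1]
print(round(cohens_kappa(llm_scores, human_scores), 3))  # prints 0.68
```

Because kappa discounts the agreement expected by chance from the marginal label distributions, it is a stricter measure than raw percent agreement (0.75 in this toy example), which is why the study reports it alongside classification accuracy.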
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations
Authors
Institutions
- University of California, San Francisco (US)
- San Francisco Public Library (US)
- Microsoft (United States) (US)
- Jacobs (United States) (US)
- University of Southern California (US)
- Children's Hospital of Los Angeles (US)
- University of Colorado Denver (US)
- Primary Children's Hospital (US)
- University of Utah (US)
- Oregon Health & Science University (US)
- Cincinnati Children's Hospital Medical Center (US)
- University of California, Davis (US)
- Nationwide Children's Hospital (US)
- Palo Alto University (US)
- Stanford University (US)
- Emory University (US)
- Children's Healthcare of Atlanta (US)
- George Washington University Hospital (US)
- University of Washington (US)