This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Validation of an AI-powered mobile application for personalizing medical note explanations
Citations: 1
Authors: 1
Year: 2025
Abstract
Almost half of adults struggle to understand written health information, making medical communication a critical barrier to patient care. While AI shows promise for improving health communication, few tools have been rigorously validated for personalizing medical explanations. I developed Patiently AI, a mobile application that uses large language models to simplify medical notes with audience-specific adaptations (child, teenager, adult, carer) and tone variations (friendly, informative, reassuring). The three-phase validation comprised: a computational analysis of readability improvements across 210 AI-generated outputs using established metrics; an expert evaluation by 15 healthcare professionals assessing clinical accuracy, safety, and communication quality; and a patient survey of 54 participants evaluating preferences, comprehension, and acceptance. AI-generated explanations showed significant readability improvements: mean Flesch–Kincaid Grade Level decreased by 2.96 levels (10.57→7.61), Flesch Reading Ease increased by 31.9 points (37.7→69.6), and Gunning Fog Index decreased by 4.09 points (14.5→10.4). Improvements were greatest for younger audiences (child: 4.25 grade-level reduction vs. adult: 1.80). Expert evaluation rated AI outputs highly for medical accuracy (4.49±0.51/5), clarity (4.53±0.50/5), and trustworthiness (4.37±0.58/5), with 87.3% of outputs deemed clinically safe. Patient evaluation showed strong acceptance: 70.0% preferred AI-generated explanations, with high ratings for clarity (4.58±0.52/5) and confidence in care (4.19±0.65/5), and 70.4% indicated they would be likely to use the application. This study provides robust evidence that AI can safely and effectively personalize medical communication while maintaining clinical accuracy. The validated Patiently AI application represents a scalable solution for improving health literacy and patient engagement across diverse populations.
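The readability metrics named in the abstract (Flesch–Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index) are all closed-form functions of sentence, word, and syllable counts. The sketch below shows how such scores can be computed; the syllable counter is a crude vowel-group heuristic of my own (the study's exact tokenization and syllable rules are not specified in the abstract), so the absolute values will differ slightly from dedicated tools.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels; every word has >= 1.
    # This is an approximation, not the dictionary-based count real tools use.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Gunning Fog treats words of 3+ syllables as "complex".
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    w, s = len(words), max(1, len(sentences))
    return {
        # Standard published formulas for each index:
        "fk_grade": 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59,
        "reading_ease": 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w),
        "gunning_fog": 0.4 * ((w / s) + 100 * (complex_words / w)),
    }
```

Comparing a simplified explanation against the original note with this function reproduces the direction of the reported effect: shorter sentences and fewer polysyllabic words lower the grade-level and Fog scores and raise Reading Ease.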
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations