This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating Large Language Models for Colonoscopy Preparation Assistance: Correctness and Diversity in Synthetic Dialogues
Citations: 0
Authors: 7
Year: 2025
Abstract
Background: Colorectal cancer is the third leading cause of cancer-related deaths in the United States, and colonoscopy remains the gold standard for early detection and prevention. However, many procedures are postponed due to inadequate bowel preparation, a preventable failure often caused by patients' difficulty in understanding or following written prep instructions. Prior interventions such as reminder apps and instructional videos have improved adherence only modestly, largely because they cannot answer patients' specific questions. Recent advances in large language models (LLMs) raise the possibility of developing conversational assistants that can provide interactive support to patients during procedure preparation.

Objective: This study evaluated the correctness and diversity of synthetic dialogues generated by leading LLMs acting as both simulated AI Coaches and patients for colonoscopy preparation.

Methods: Four leading LLMs, OpenAI's o3 and GPT-4.1, Meta's Llama 3.3 70B, and Mistral's Large-2411, were used to generate 250 patient-AI Coach dialogues per model. Prompts were designed to elicit diverse patient questions about diet, medications, and other prep-related topics. Human raters, including medical experts, evaluated responses for correctness, error type, and potential harmfulness. Automatic evaluation using an LLM-as-a-judge approach complemented human evaluation.

Results: Leading models approached but did not achieve adequate performance. Closed-weight models (GPT-4.1, o3) outperformed open-weight models (Llama, Mistral) on correctness, while multi-prompt generation substantially improved question diversity. All models produced harmful errors, primarily due to omissions or misinterpretations of prep instructions.

Conclusions: While LLMs demonstrate strong potential for colonoscopy preparation support, none are yet reliable enough for unsupervised deployment in patient-facing contexts without effective safety layers.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations