This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Large language model as clinical decision support system augments medication safety in 16 clinical specialties
Citations: 8 · Authors: 19 · Year: 2025
Abstract
Large language models (LLMs) have emerged as tools to support healthcare delivery, from automating tasks to aiding clinical decision-making. This study evaluated LLMs as an alternative to rule-based alert systems, focusing on their ability to identify prescribing errors. It was designed as a prospective, cross-over, open-label study involving 91 error scenarios based on 40 clinical vignettes across 16 medical and surgical specialties. We developed and validated five LLM models using a retrieval-augmented generation framework. The best-performing model was evaluated under three implementation strategies: an LLM-based clinical decision support system (CDSS) alone, a pharmacist plus the LLM-based CDSS (co-pilot), and a pharmacist alone. The co-pilot arm demonstrated the best performance, with an accuracy of 61% (precision 0.57, recall 0.61, F1 0.59). In detecting errors posing serious harm, the co-pilot mode increased accuracy 1.5-fold over the pharmacist alone. Effective LLM integration for complex tasks such as medication chart reviews can enhance healthcare professional performance and improve patient safety.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations