This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Assessing Risk in Implementing New Artificial Intelligence Triage Tools—How Much Risk is Reasonable in an Already Risky World?
Citations: 10
Authors: 11
Year: 2025
Abstract
Risk prediction in emergency medicine (EM) poses unique challenges due to urgency, blurred research-practice distinctions, and the high-pressure environment of emergency departments (EDs). Artificial intelligence (AI) risk prediction tools have been developed with the aim of streamlining triage processes and mitigating perennial problems affecting EDs globally, such as overcrowding and delays. The implementation of these tools is complicated by the potential risks of over-triage and under-triage, untraceable false positives, and the possibility that healthcare professionals' biases toward technology lead to incorrect usage of such tools. This paper explores the risks surrounding these issues through an analysis of a case study involving a machine learning triage tool, the Score for Emergency Risk Prediction (SERP), in Singapore. This tool estimates mortality risk at presentation to the ED. After two successful retrospective studies demonstrating SERP's strong predictive accuracy, researchers concluded that a pre-implementation randomised controlled trial (RCT) would not be feasible because the tool interacts with clinical judgement, complicating the blinded arm of the trial. This led them to consider other methods of testing SERP's real-world capabilities, such as ongoing-evaluation studies. We discuss the outcomes of a risk-benefit analysis to argue that the proposed implementation strategy is ethically appropriate and aligns with improvement-focused and systemic approaches to implementation, especially the learning health systems (LHS) framework, to ensure safety, efficacy, and ongoing learning.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations
Authors
Institutions
- National University of Singapore (SG)
- University of Oxford (GB)
- University of Otago (NZ)
- University of Wollongong (AU)
- Agency for Science, Technology and Research (SG)
- Institute for Infocomm Research (SG)
- The University of Sydney (AU)
- Singapore General Hospital (SG)
- Duke-NUS Medical School (SG)
- University of Antwerp (BE)