This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Understanding and Addressing Bias in Artificial Intelligence Systems: A Primer for the Emergency Medicine Physician
Citations: 1
Authors: 14
Year: 2025
Abstract
Artificial intelligence (AI) tools and technologies are increasingly being integrated into emergency medicine (EM) practice, not only offering potential benefits such as improved efficiency, better patient experience, and increased safety, but also resulting in potential risks including exacerbation of biases. These biases, inadvertently embedded in AI algorithms or training data, can adversely affect clinical decision making for diverse patient populations. Bias is a universal human attribute, subject to introduction into any human interaction. The risk with AI is magnification of, or even normalization of, patterns of biases across the health care ecosystem within tools that in time may be considered authoritative. This article, the work of members of the American College of Emergency Physicians (ACEP) AI Task Force, aims to equip emergency physicians (EPs) with a practical framework for understanding, identifying, and addressing bias in clinical and operational AI tools encountered in the emergency department (ED). For this publication, we have defined bias as a systematic flaw in a decision-making process that results in unfair or unintended outcomes that can be inadvertently embedded in AI algorithms or training data. This can result in adverse effects on clinical decision making for diverse patient populations. We begin by reviewing common sources of AI bias relevant to EM, including data, algorithmic, measurement, and human-interaction factors, and then, we discuss the potential pitfalls. Following this, we use illustrative examples from EM practice (eg, triage tools, risk stratification, and medical devices) to demonstrate how bias can manifest. We subsequently discuss the evolving regulatory landscape, structured assessment frameworks (including predeployment, continuous monitoring, and postdeployment steps), key principles (like sociotechnical perspectives and stakeholder engagement), and specific tools. 
Finally, this review outlines the EP's vital role in mitigation of AI-related biases through advocacy, local validation, clinical feedback, demanding transparency, and maintaining clinical judgment over automation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations
Authors
Institutions
- Mount Sinai Health System (US)
- Icahn School of Medicine at Mount Sinai (US)
- Rutgers, The State University of New Jersey (US)
- Carleton College (US)
- American College of Emergency Physicians (US)
- The University of Texas Southwestern Medical Center (US)
- University of Virginia (US)
- Inwood Community Services (US)
- Medical College of Wisconsin (US)
- New York University (US)
- Yale University (US)