OpenAlex · Updated hourly · Last updated: 01.04.2026, 03:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Understanding and Addressing Bias in Artificial Intelligence Systems: A Primer for the Emergency Medicine Physician

2025 · 1 Citation · Journal of the American College of Emergency Physicians Open · Open Access

Citations: 1
Authors: 14
Year: 2025

Abstract

Artificial intelligence (AI) tools and technologies are increasingly being integrated into emergency medicine (EM) practice, offering potential benefits such as improved efficiency, better patient experience, and increased safety, but also posing potential risks, including the exacerbation of biases. These biases, inadvertently embedded in AI algorithms or training data, can adversely affect clinical decision making for diverse patient populations. Bias is a universal human attribute that can be introduced into any human interaction. The risk with AI is the magnification, or even normalization, of patterns of bias across the health care ecosystem within tools that may in time be considered authoritative. This article, the work of members of the American College of Emergency Physicians (ACEP) AI Task Force, aims to equip emergency physicians (EPs) with a practical framework for understanding, identifying, and addressing bias in clinical and operational AI tools encountered in the emergency department (ED). For this publication, we define bias as a systematic flaw in a decision-making process that results in unfair or unintended outcomes. We begin by reviewing common sources of AI bias relevant to EM, including data, algorithmic, measurement, and human-interaction factors, and then discuss the potential pitfalls. Following this, we use illustrative examples from EM practice (eg, triage tools, risk stratification, and medical devices) to demonstrate how bias can manifest. We subsequently discuss the evolving regulatory landscape, structured assessment frameworks (including predeployment, continuous monitoring, and postdeployment steps), key principles (such as sociotechnical perspectives and stakeholder engagement), and specific tools.
Finally, this review outlines the EP's vital role in mitigating AI-related biases through advocacy, local validation, clinical feedback, demanding transparency, and maintaining clinical judgment over automation.
