This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Research trends and ethical perspectives on explainable artificial intelligence in emergency medicine: a bibliometric analysis
Citations: 0 · Authors: 1 · Year: 2026
Abstract
BACKGROUND: Explainable artificial intelligence (XAI) has become increasingly relevant for ensuring transparency, interpretability, and trust in clinical decision support systems. In emergency medicine, where decision-making is time-critical and data are often incomplete, XAI provides significant opportunities while also raising ethical and methodological challenges. Despite the rapid growth of AI applications in acute care, bibliometric studies explicitly integrating explainability and ethics remain limited. METHODS: A bibliometric analysis of 433 publications on XAI in emergency medicine was conducted using the Web of Science Core Collection. The search covered 1986 through November 2025 and included peer-reviewed research articles and reviews in English related to emergency medicine, artificial intelligence, explainability, and ethics. Bibliometric indicators (publication trends, citation counts, journals, authors, and countries) were analyzed using Bibliometrix (R), while VOSviewer was used to visualize thematic clusters and keyword co-occurrence. Citations were analyzed as cumulative counts up to November 2025 and normalized to per-publication counts per year. RESULTS: Research output increased sharply after 2018, peaking in 2023 with approximately 90 publications, reflecting the growing focus on interpretability and transparency in emergency care. Cumulative citations exceeded 1,400 by 2025. The United States, the United Kingdom, and China were the most productive countries. Annals of Emergency Medicine, NPJ Digital Medicine, and BMJ Open were the most influential journals, while Ong M.E.H., Dwivedi G., Stewart J., Wang Y., and Li J. emerged as leading contributors. 
Thematic mapping revealed four major clusters: (1) methodological development of interpretable models, (2) clinical applications in triage, imaging, and sepsis risk prediction, (3) ethical and human-factor dimensions (bias, accountability, transparency), and (4) emerging topics such as large language models. Despite rapid progress, most studies remained retrospective and lacked standardized interpretability metrics, multicenter validation, and consistent reporting of explainability outputs. CONCLUSION: Research on XAI in emergency medicine is expanding rapidly and is increasingly shaped by a small group of influential journals and authors. However, critical gaps remain, including the limited availability of prospective studies, insufficient clinician involvement, and ethical frameworks that are not yet fully tailored to emergency settings. Addressing these gaps through multidisciplinary collaboration, standardized evaluation metrics, and stronger governance will be important to support transparency, accountability, and the safe clinical adoption of XAI in emergency medicine.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,539 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,921 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,586 citations