This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable and reproducible AI: culturally responsive AI for health equity in minoritized groups
0
Citations
10
Authors
2026
Year
Abstract
Artificial intelligence (AI) is transforming healthcare by enabling advanced diagnostics, personalized treatments, and improved operational efficiencies. By identifying complex data patterns and correlations, AI could supplement clinical decision-making, enabling more rapid diagnoses and treatment decisions tailored to the unique needs of diverse communities. However, realizing these benefits requires that clinical AI models be consistent, reliable, and validated across diverse populations and clinical environments. In addition, because these data patterns and correlations may often be unexpected, AI models require more explainability than other medical technologies. This is especially true for complex models, where the processes driving a prediction are often unclear and uninterpretable to both model developers and medical professionals, leading AI models to be frequently described as "black boxes". To address this fundamental challenge of interpretability, explainable AI (XAI) has emerged as a critical approach, providing insight (often in a post-hoc manner) into why models generate a given output. Studies have shown that most physicians prefer XAI to non-explainable AI. This commentary therefore explores key considerations needed to ensure that AI promotes health equity in marginalized communities, building on similar shifts toward anticipatory health action that have been explored in humanitarian and climate AI contexts (8, 9). We argue that equity in AI depends on embedding explainability and reproducibility within culturally responsive frameworks that address historical and structural bias.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations