This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainability in Deep Learning in Healthcare and Medicine: Panacea or Pandora’s Box? A Systemic View
Citations: 0
Authors: 1
Year: 2026
Abstract
Explainability in deep learning (XDL) for healthcare is increasingly portrayed as essential for addressing the “black box” problem in clinical artificial intelligence. However, this universal transparency mandate may create unintended consequences, including cognitive overload, spurious confidence, and workflow disruption. This paper examines a fundamental question: Is explainability a panacea that resolves AI’s trust deficit, or a Pandora’s box that introduces new risks? Drawing on general systems theory, we demonstrate that the answer is profoundly context dependent. Through systemic analysis of current XDL methods (saliency maps, LIME, SHAP, and attention mechanisms), we reveal systematic disconnects between technical transparency and clinical utility. This paper argues that XDL is a context-dependent systemic property rather than a universal requirement. It functions as a panacea when proportionately applied to high-stakes reasoning tasks (cancer treatment planning, complex diagnosis) within integrated socio-technical architectures. Conversely, it becomes a Pandora’s box when superficially imposed on routine operational functions (scheduling, preprocessing) or time-critical emergencies (e.g., cardiac arrest), where comprehensive explanation delays lifesaving intervention. The paper proposes a risk-stratified framework recognizing that a specific subset of healthcare AI applications, namely those involving high-stakes clinical reasoning, requires comprehensive explainability, while other applications benefit from calibrated transparency appropriate to their clinical context. We conclude that explainability is neither a cure-all nor an inevitable harm, but rather a dynamic equilibrium requiring continuous rebalancing across technical, cognitive, and organizational dimensions.
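The risk-stratified framework described in the abstract can be pictured as a simple tiering policy. Below is a minimal, hypothetical Python sketch of that idea; the tier names, the `select_explanation_depth` function, and the returned depth labels are illustrative assumptions for the three clinical contexts named above, not the paper's actual implementation.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical clinical context categories from the risk-stratified framework."""
    HIGH_STAKES_REASONING = "high_stakes"    # e.g., cancer treatment planning, complex diagnosis
    ROUTINE_OPERATIONAL = "routine"          # e.g., scheduling, preprocessing
    TIME_CRITICAL_EMERGENCY = "emergency"    # e.g., cardiac arrest response


def select_explanation_depth(tier: RiskTier) -> str:
    """Map a clinical risk tier to a calibrated level of transparency.

    Illustrative policy only: comprehensive explanation is reserved for
    contexts where the stakes justify its cognitive and temporal cost.
    """
    if tier is RiskTier.HIGH_STAKES_REASONING:
        # Full XDL stack (e.g., saliency maps, SHAP attributions) for clinician review
        return "comprehensive"
    if tier is RiskTier.TIME_CRITICAL_EMERGENCY:
        # Act first; explanation is deferred to a retrospective audit trail
        return "post_hoc_audit"
    # Routine operational functions: lightweight logging suffices
    return "minimal"


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.name}: {select_explanation_depth(tier)}")
```

The point of the sketch is the design choice it encodes: explanation depth is a function of clinical context rather than a fixed property of the model.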
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,796 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,334 citations
"Why Should I Trust You?"
2016 · 14,607 citations
Generative adversarial networks
2020 · 13,215 citations