This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in healthcare: Interpretable deep learning models for disease diagnosis
Citations: 2
Authors: 1
Year: 2019
Abstract
The rapid integration of Artificial Intelligence (AI) into healthcare systems has brought unprecedented advances in disease diagnosis. Among the various AI paradigms, Explainable AI (XAI) has emerged as a critical component, ensuring not only high predictive accuracy but also transparent, understandable insights into decision-making processes. This review paper comprehensively explores the application of Interpretable Deep Learning Models (IDLMs) in healthcare, focusing specifically on disease diagnosis. The first section examines the growing importance of AI in healthcare and the challenges inherent in the black-box nature of traditional deep learning models. As the demand for reliable and interpretable decision support systems in healthcare intensifies, models that can elucidate their decision rationale become imperative. In response, a multitude of IDLMs have been developed that incorporate transparency and interpretability into their architectures. The subsequent sections provide an in-depth analysis of the IDLMs used in disease diagnosis, with particular emphasis on their interpretability mechanisms. Noteworthy approaches such as LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and attention-based architectures are explored, elucidating their roles in rendering complex deep learning models interpretable. Case studies and empirical evidence underscore the practical significance of these models in improving diagnostic accuracy and fostering trust between healthcare practitioners and AI systems. Finally, the paper discusses the ethical and regulatory considerations surrounding the deployment of IDLMs in healthcare settings, addressing issues of bias, fairness, and accountability and emphasizing the importance of responsible AI practices in patient care.
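As a concrete illustration of the local, model-agnostic explanations the abstract refers to, the sketch below applies LIME to a simple diagnostic classifier. The dataset (scikit-learn's breast-cancer data), the random-forest model, and all parameter choices are illustrative assumptions for this overview, not details taken from the reviewed paper.

```python
# Minimal sketch: explaining one prediction of a diagnostic classifier
# with LIME. Dataset, model, and hyperparameters are illustrative
# stand-ins, not prescribed by the reviewed paper.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Binary diagnosis task: malignant vs. benign tumors.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# LIME fits a sparse linear surrogate around a single instance, so the
# explanation is local: it describes this prediction, not the whole model.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

# Top feature contributions for this single patient's prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

By contrast, SHAP (also named in the abstract) attributes predictions via Shapley values and additionally supports aggregating per-feature attributions into global importance summaries; the per-instance workflow is analogous.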
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,463 citations
Generative Adversarial Nets
2014 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14,314 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations