This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI for Healthcare
Citations: 0
Authors: 4
Year: 2025
Abstract
With rapid advancements in artificial intelligence (AI), ranging from traditional machine learning techniques to sophisticated deep learning models, its integration into healthcare systems has accelerated significantly. These developments enable AI to assist in various critical healthcare applications, such as disease diagnosis, treatment planning, predictive analytics, personalised medicine, and medical imaging, which significantly enhance patient care and clinical outcomes. However, the opaque nature of many complex AI models poses a significant challenge, especially given the high-stakes nature of healthcare decisions where patient outcomes can be critically affected. Explainability is essential to mitigate these risks, as it provides transparency that not only aids healthcare professionals in making more informed and confident decisions but also builds trust among patients, allowing them to understand the reasoning behind AI-driven recommendations. This section introduces the foundational concepts of explainable AI (XAI) and highlights prominent methodologies for enhancing model interpretability across various domains. We provide a comprehensive analysis of XAI applications in healthcare, focusing on the unique requirements and challenges posed by different data modalities, including time-series data from monitoring devices, medical text from clinical records, medical images, and audio data such as heart or lung sounds. In conclusion, we discuss the current limitations and challenges of implementing XAI in healthcare settings, while also identifying promising future research directions that could drive further innovation and enhance the reliability of AI-powered healthcare solutions.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,562 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,298 citations
"Why Should I Trust You?"
2016 · 14,384 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations