This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Hybrid Explainable AI Framework (HXAI) for Accurate and Interpretable Diagnosis of Alzheimer’s Disease
Citations: 1
Authors: 18
Year: 2025
Abstract
<b>Background/Objectives</b>: In clinical practice, Explainable AI (XAI) enables non-specialists and general practitioners to make precise diagnoses. Current XAI approaches are limited: many explain only clinical data or only MRI, or present explanations in unclear ways, reducing their clinical utility. <b>Methods</b>: In this paper, we propose a novel Hybrid Explainable AI (HXAI) framework. It uniquely integrates model-agnostic (SHAP) and model-specific (Grad-CAM) explanation methods within a unified structure for the diagnosis of Alzheimer's disease. This dual-layer explainability constitutes the main originality of the study, as it allows both quantitative (feature-level) and spatial (region-level) interpretation within a single diagnostic framework. Clinical features (e.g., Mini-Mental State Examination (MMSE), normalized Whole Brain Volume (nWBV), Socioeconomic Status (SES), age) are combined with MRI-derived features extracted via ResNet50, and these features are integrated using ensemble learning with a logistic regression meta-model. <b>Results</b>: Feature-based explanations were validated with removal-based tests, achieving 83.61% explainability accuracy and confirming the importance of these features. Model-specific Grad-CAM visualizations were used to explain MRI predictions, achieving 58.16% explainability accuracy for visual explanations. <b>Conclusions</b>: Our HXAI framework integrates model-agnostic and model-specific approaches in a structured manner, supported by quantitative metrics. This dual-layer interpretability enhances transparency, improves explainability accuracy, and provides an accurate and interpretable framework for AD diagnosis, bridging the gap between model accuracy and clinical trust.
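The ensemble described in the Methods section (per-modality base models whose outputs are fused by a logistic regression meta-model) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the data is synthetic, the clinical features and MRI embeddings are random stand-ins, and the base learners (random forests here) are an assumption.

```python
# Minimal stacking sketch: one base model per modality, with a
# logistic regression meta-model combining their predicted probabilities.
# All data and model choices below are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins: 4 clinical features (e.g., MMSE, nWBV, SES, age)
# and a 16-dim embedding standing in for ResNet50-derived MRI features.
clinical = rng.normal(size=(n, 4))
mri = rng.normal(size=(n, 16))
# Synthetic binary label loosely tied to one feature from each modality.
y = (clinical[:, 0] + mri[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

Xc_tr, Xc_te, Xm_tr, Xm_te, y_tr, y_te = train_test_split(
    clinical, mri, y, test_size=0.25, random_state=0
)

# One base learner per modality.
clf_clinical = RandomForestClassifier(random_state=0).fit(Xc_tr, y_tr)
clf_mri = RandomForestClassifier(random_state=0).fit(Xm_tr, y_tr)

def stack(Xc, Xm):
    """Stack the two base models' positive-class probabilities."""
    return np.column_stack([
        clf_clinical.predict_proba(Xc)[:, 1],
        clf_mri.predict_proba(Xm)[:, 1],
    ])

# Logistic regression meta-model fuses the per-modality predictions.
meta = LogisticRegression().fit(stack(Xc_tr, Xm_tr), y_tr)
acc = meta.score(stack(Xc_te, Xm_te), y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In this setup the meta-model learns how much weight to place on each modality, which is the role the abstract assigns to the logistic regression layer; SHAP values would then be computed on the clinical base model and Grad-CAM on the imaging backbone.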
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,796 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,334 citations
"Why Should I Trust You?"
2016 · 14,607 citations
Generative adversarial networks
2020 · 13,214 citations
Authors
- Fatima Hasan Al-bakri
- Wan Mohd Yaakob Wan Bejuri
- Mohammed Nasser Al-Andoli
- Raja Rina Raja Ikram
- Hui Min Khor
- Mohd Syafiq Mispan
- Norhazwani Md Yunos
- Noor Fazilla Abd Yusof
- Muhammad Hafidz Fazli Md Fauadi
- Abdul Syukor Mohamad Jaya
- Nor Aiza Moketar
- Noorrezam Yusop
- Kharismi Burhanudin
- Tyanita Puti Marindah Wardhani
- Anugrayani Bustamin
- Zahir Zainuddin
- Deasy Wahyuni
- Umi Kalsom Ariffin