This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Use of explainable AI (xAI) in dementia detection and prognosis: a scoping review
Citations: 0
Authors: 4
Year: 2026
Abstract
Dementia poses a significant global health challenge for both clinicians and patients, affecting millions of individuals worldwide, yet its early diagnosis remains underexplored. Technology-driven dementia care solutions are reshaping this landscape with state-of-the-art methodologies such as Artificial Intelligence (AI); however, because of the black-box nature of many AI models, Explainable AI (xAI) is needed to build trust and confidence among end-users and to make such systems suitable for real-world healthcare applications. This scoping review provides a comprehensive overview of current xAI usage in this field by synthesising data from studies published since 2014. Through a structured literature extraction process following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, 415 scientific papers were screened, yielding 70 eligible articles for in-depth analysis of dementia detection and prognosis using xAI techniques. Most studies relied on public datasets (e.g. ADNI) without clinical validation. A detailed thematic analysis presents the findings of this review, identifying the most widely used tools, approaches, and types of data, as well as the key limitations and challenges in implementing xAI for dementia detection and prognosis in recent research. These findings offer valuable insights and direction for future research by highlighting the underutilisation of multimodal data integration, persistent inconsistencies in feature importance rankings across methods, and the imprecision of visual explanations.
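One of the review's findings, the inconsistency of feature importance rankings across xAI methods, is easy to reproduce. The sketch below is purely illustrative and not drawn from any of the reviewed studies: it compares impurity-based and permutation-based importances on synthetic tabular data, with hypothetical feature names standing in for typical dementia-screening variables.

```python
# Illustrative sketch: two common feature-importance methods applied to the
# same model can rank the same features differently. Dataset, feature names,
# and model choice are hypothetical stand-ins, not the reviewed studies' setups.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular screening data
# (e.g. cognitive scores, volumetric MRI features).
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
feature_names = ["MMSE", "hippocampal_vol", "age",
                 "CSF_tau", "education_yrs", "APOE4"]  # hypothetical labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Method 1: impurity-based importance (derived from the training data).
impurity_rank = np.argsort(model.feature_importances_)[::-1]

# Method 2: permutation importance (measured on held-out data).
perm = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

print("Impurity ranking:   ", [feature_names[i] for i in impurity_rank])
print("Permutation ranking:", [feature_names[i] for i in perm_rank])
# The two rankings frequently disagree, especially with correlated features,
# which is why single-method explanations warrant caution in clinical use.
```

In practice the disagreement grows with feature correlation, one reason the review flags reliance on a single explanation method as a limitation.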
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,657 cit.
Generative Adversarial Nets
2023 · 19,894 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,315 cit.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,512 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,188 cit.