OpenAlex · Updated hourly · Last updated: 18 Apr 2026, 21:01

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Use of explainable AI (xAI) in dementia detection and prognosis: a scoping review

2026 · 0 citations · BMC Medical Informatics and Decision Making · Open Access

Citations: 0 · Authors: 4 · Year: 2026

Abstract

Dementia poses a significant global health challenge for both clinicians and patients, impacting millions of individuals worldwide, yet its early diagnosis remains underexplored. Current technology-driven dementia care solutions are revolutionising this landscape with state-of-the-art methodologies such as Artificial Intelligence (AI); however, due to AI's black-box nature, Explainable AI (xAI) is needed to build trust and confidence among end-users and make these methods suitable for real-world healthcare applications. This scoping review aims to provide a comprehensive overview of current xAI usage in this field by synthesising data from studies published since 2014. Through a structured literature extraction process following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) guidelines, a total of 415 scientific papers were screened, yielding 70 eligible articles for in-depth analysis of dementia detection and prognosis using xAI techniques. Most studies relied on public datasets (e.g. ADNI) without clinical validation. A detailed thematic analysis presents the findings of this review, identifying the most widely used tools, approaches, and types of data, as well as the key limitations and challenges in implementing xAI for dementia detection and prognosis in recent research. These findings provide valuable insights and direction for future research by highlighting the underutilisation of multimodal data integration, persistent inconsistencies in feature importance rankings across methods, and the imprecision of visual explanations.

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare