OpenAlex · Updated hourly · Last updated: 01.04.2026, 13:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Choose your explanation: a comparison of SHAP and Grad-CAM in human activity recognition

2025 · 2 citations · Applied Intelligence · Open Access
Open full text at the publisher

2 citations · 5 authors · Year: 2025

Abstract

Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy. This is especially important in high-risk environments like healthcare, where understanding model decisions is critical to ensure ethical, sound, and trustworthy outcome predictions. However, users are often unsure which explainability method to choose for their specific use case. We present a comparative analysis of two explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM), within the domain of human activity recognition (HAR) utilizing graph convolutional networks (GCNs). By evaluating these methods on skeleton-based input representations from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into both approaches' strengths, limitations, and differences, offering a roadmap for selecting the most appropriate explanation method for a given model and application. We qualitatively and quantitatively compare the two methods, focusing on feature importance ranking and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, Grad-CAM delivers faster, spatially oriented explanations, making the two methods complementary depending on the application's requirements. Given the importance of XAI in enhancing trust and transparency in ML models, particularly in sensitive environments like healthcare, our research demonstrates how SHAP and Grad-CAM could complement each other to provide model explanations.
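The perturbation experiments mentioned in the abstract can be illustrated with a minimal sketch: zero out the features an explanation method ranks as most important and measure how much the model's output drops. A faithful importance ranking should produce a larger drop than an unfaithful one. The toy `model`, its weights, and the rankings below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def model(x):
    # Stand-in classifier score (illustrative): the first features carry
    # the most weight, so perturbing them should change the output most.
    weights = np.array([0.6, 0.3, 0.08, 0.02])
    return float(weights @ x)

def perturbation_sensitivity(x, importance_ranking, k):
    """Zero out the k top-ranked features and return the absolute change
    in model output; a faithful ranking yields a large change."""
    perturbed = x.copy()
    perturbed[importance_ranking[:k]] = 0.0
    return abs(model(x) - model(perturbed))

x = np.ones(4)
good_ranking = np.array([0, 1, 2, 3])  # e.g. derived from SHAP attributions
bad_ranking = np.array([3, 2, 1, 0])   # reversed ranking for contrast

drop_good = perturbation_sensitivity(x, good_ranking, k=2)
drop_bad = perturbation_sensitivity(x, bad_ranking, k=2)
print(drop_good, drop_bad)  # → 0.9 0.1: the faithful ranking drops more
```

In the paper's setting the inputs would be skeleton-based features fed to a GCN rather than a 4-dimensional toy vector, but the comparison logic is the same for both SHAP- and Grad-CAM-derived rankings.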

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education