OpenAlex · Updated hourly · Last updated: May 4, 2026, 18:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Quantifying Explanation Disagreement Between SHAP and LIME Across Tabular Classification Models

2026 · 0 citations · International Journal of Computer Sciences and Engineering · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

Explainable artificial intelligence (XAI) methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) have become essential tools for interpreting machine learning predictions. Despite their distinct theoretical foundations, practitioners often use these methods interchangeably, assuming convergent outputs. This study systematically quantifies disagreement between SHAP and LIME explanations across three benchmark tabular datasets (Adult Income, German Credit, and Bank Marketing) and three classification models (Logistic Regression, Random Forest, and XGBoost). The methodology introduces complementary disagreement metrics: Explanation Sign Conflict Rate (ESCR) measuring directional attribution disagreement, Weighted Rank Divergence (WRD) capturing importance-weighted ranking differences, and Explanation Entropy quantifying attribution distribution characteristics. Experimental results reveal substantial disagreement, with Kendall tau rank correlations ranging from 0.006 to 0.375 across configurations. Linear models demonstrate consistently higher agreement (mean tau = 0.309) compared to ensemble models (mean tau = 0.158 for Random Forest, 0.153 for XGBoost). LIME stability analysis confirms that observed disagreements reflect genuine methodological differences rather than stochastic noise, with variance below 0.01 across all configurations. Disagreement between methods does not indicate which explanation is correct; rather, it serves as a diagnostic signal revealing where explanations should be interpreted with caution. These findings provide practitioners with quantitative benchmarks for expected explanation divergence.
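The abstract names three disagreement metrics (ESCR, WRD, Explanation Entropy) and Kendall tau rank correlation but does not give their formulas. The Python sketch below illustrates one plausible reading of each metric on toy attribution vectors; the function names (escr, weighted_rank_divergence, explanation_entropy) and the example values are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of the disagreement metrics described in the abstract.
    # The exact definitions of ESCR and WRD are not given there, so the
    # formulas below are plausible readings, not the authors' code.
    import numpy as np
    from scipy.stats import kendalltau


    def escr(shap_attr: np.ndarray, lime_attr: np.ndarray) -> float:
        """Explanation Sign Conflict Rate (assumed definition): fraction of
        features whose SHAP and LIME attributions have opposite signs."""
        conflicts = np.sign(shap_attr) * np.sign(lime_attr) < 0
        return float(np.mean(conflicts))


    def weighted_rank_divergence(shap_attr: np.ndarray,
                                 lime_attr: np.ndarray) -> float:
        """Weighted Rank Divergence (assumed definition): absolute rank
        differences between the two attribution vectors, weighted by mean
        absolute importance so that disagreement on influential features
        counts more."""
        shap_rank = np.argsort(np.argsort(-np.abs(shap_attr)))
        lime_rank = np.argsort(np.argsort(-np.abs(lime_attr)))
        weights = (np.abs(shap_attr) + np.abs(lime_attr)) / 2
        weights = weights / weights.sum()
        return float(np.sum(weights * np.abs(shap_rank - lime_rank)))


    def explanation_entropy(attr: np.ndarray) -> float:
        """Shannon entropy of the normalized absolute attributions (assumed
        definition); higher values mean importance is spread more evenly
        across features."""
        p = np.abs(attr) / np.abs(attr).sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))


    # Toy attribution vectors for a single prediction (illustrative only).
    shap_vals = np.array([0.42, -0.10, 0.03, 0.25, -0.07])
    lime_vals = np.array([0.35, 0.08, -0.02, 0.30, -0.11])

    tau, _ = kendalltau(shap_vals, lime_vals)
    print(f"ESCR:         {escr(shap_vals, lime_vals):.3f}")
    print(f"WRD:          {weighted_rank_divergence(shap_vals, lime_vals):.3f}")
    print(f"Kendall tau:  {tau:.3f}")
    print(f"SHAP entropy: {explanation_entropy(shap_vals):.3f}")

In the study itself, such per-instance scores would presumably be averaged over a test set for each dataset/model configuration; the toy vectors here only show the mechanics of comparing one explanation pair.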

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Computational and Text Analysis Methods