This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Quantifying Explanation Disagreement Between SHAP and LIME Across Tabular Classification Models
Citations: 0
Authors: 1
Year: 2026
Abstract
Explainable artificial intelligence (XAI) methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) have become essential tools for interpreting machine learning predictions. Despite their distinct theoretical foundations, practitioners often use these methods interchangeably, assuming convergent outputs. This study systematically quantifies disagreement between SHAP and LIME explanations across three benchmark tabular datasets (Adult Income, German Credit, and Bank Marketing) and three classification models (Logistic Regression, Random Forest, and XGBoost). The methodology introduces complementary disagreement metrics: Explanation Sign Conflict Rate (ESCR) measuring directional attribution disagreement, Weighted Rank Divergence (WRD) capturing importance-weighted ranking differences, and Explanation Entropy quantifying attribution distribution characteristics. Experimental results reveal substantial disagreement, with Kendall tau rank correlations ranging from 0.006 to 0.375 across configurations. Linear models demonstrate consistently higher agreement (mean tau = 0.309) compared to ensemble models (mean tau = 0.158 for Random Forest, 0.153 for XGBoost). LIME stability analysis confirms that observed disagreements reflect genuine methodological differences rather than stochastic noise, with variance below 0.01 across all configurations. Disagreement between methods does not indicate which explanation is correct; rather, it serves as a diagnostic signal revealing where explanations should be interpreted with caution. These findings provide practitioners with quantitative benchmarks for expected explanation divergence.
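The Explanation Sign Conflict Rate described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, zero-attribution handling, and input format are assumptions for illustration only.

```python
import numpy as np

def sign_conflict_rate(shap_attr, lime_attr):
    """Fraction of features whose SHAP and LIME attributions disagree in sign.

    Illustrative sketch of an ESCR-style metric; the paper's exact
    definition may differ (e.g. in weighting or treatment of zeros).
    """
    shap_attr = np.asarray(shap_attr, dtype=float)
    lime_attr = np.asarray(lime_attr, dtype=float)
    # A sign conflict occurs when the product of the two attributions
    # for the same feature is negative (one positive, one negative).
    conflicts = (shap_attr * lime_attr) < 0
    return conflicts.mean()

# Example: 4 features, one directional disagreement -> rate 0.25
rate = sign_conflict_rate([0.5, -0.2, 0.1, 0.3], [0.4, 0.1, 0.2, 0.3])
print(rate)
```

In practice such a rate would be averaged over many explained instances; a value near zero suggests directional agreement, while higher values flag predictions whose explanations warrant caution.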
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,811 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,336 citations
"Why Should I Trust You?"
2016 · 14,615 citations
Generative adversarial networks
2020 · 13,228 citations