This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The causal transparency framework: a multi-metric approach to algorithmic accountability
Citations: 0
Authors: 3
Year: 2026
Abstract
Algorithmic systems increasingly determine high-stakes outcomes in healthcare and criminal justice, yet accountability approaches focused on predictive performance often fail to distinguish genuine causal drivers from spurious proxies. Conventional explainability methods identify predictive features but rarely clarify why decisions arise, potentially obscuring the indirect influence of protected attributes through intermediate mediators. We introduce the Causal Transparency Framework (CTF), a theory-based auditing approach that evaluates alignment between model decision logic and literature-derived causal structures. CTF compares model behavior against domain-informed reference graphs to generate hypotheses about potential mechanism divergence warranting further investigation. CTF operationalizes transparency through four complementary metrics: Causal Influence Index (CII) for theory-model alignment, Causal Complexity Measure (CCM) for structural complexity, Transparency Entropy (TE) for decision certainty, and Counterfactual Stability (CS) for intervention robustness. We evaluate CTF on COMPAS and MIMIC-III datasets across four model families using strict data partitioning to minimize methodological circularity. Our analysis reveals three key findings. First, a complexity tax emerges in sociodemographic prediction: non-linear models increase inferred structural complexity more than seven-fold compared to logistic regression without meaningful discriminative gains (AUC ≈ 0.73). Second, standard explainers (SHAP/LIME) concentrate attribution on proximate mediators; CTF flags divergences between model-implied pathways and theory-specified structures that may indicate masked demographic influence. Third, in mortality prediction, CTF prioritizes actionable physiological markers over immutable demographics. CTF provides a technical scaffold for mechanism-aware, theory-grounded auditing that generates accountable hypotheses rather than validating causal claims.
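The abstract names the four CTF metrics but does not give their formulas. As a purely illustrative sketch, the snippet below shows one plausible way a Transparency Entropy (TE)-style "decision certainty" score could be computed from a model's predicted class probabilities; the function name, normalization, and use of Shannon entropy are assumptions for illustration, not the paper's definition.

```python
# Illustrative sketch only: the paper's TE formula is not given in the abstract.
# Here TE is read as the mean Shannon entropy of predicted class probabilities,
# normalized so that 0 = fully certain decisions and 1 = maximally uncertain ones.
import numpy as np

def transparency_entropy(proba: np.ndarray, eps: float = 1e-12) -> float:
    """proba: array of shape (n_samples, n_classes) with predicted probabilities."""
    proba = np.clip(proba, eps, 1.0)
    per_sample = -np.sum(proba * np.log2(proba), axis=1)  # entropy per decision
    max_entropy = np.log2(proba.shape[1])                 # upper bound for n_classes
    return float(per_sample.mean() / max_entropy)

# Example: a fairly certain binary classifier yields a low TE-style score (~0.38).
print(transparency_entropy(np.array([[0.95, 0.05], [0.10, 0.90]])))
```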
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,488 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,263 citations
"Why Should I Trust You?"
2016 · 14,333 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,147 citations