OpenAlex · Updated hourly · Last updated: 02.04.2026, 18:59

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The causal transparency framework: a multi-metric approach to algorithmic accountability

2026 · 0 citations · AI and Ethics · Open Access

Citations: 0

Authors: 3

Year: 2026

Abstract

Algorithmic systems increasingly determine high-stakes outcomes in healthcare and criminal justice, yet accountability approaches focused on predictive performance often fail to distinguish genuine causal drivers from spurious proxies. Conventional explainability methods identify predictive features but rarely clarify why decisions arise, potentially obscuring the indirect influence of protected attributes through intermediate mediators. We introduce the Causal Transparency Framework (CTF), a theory-based auditing approach that evaluates alignment between model decision logic and literature-derived causal structures. CTF compares model behavior against domain-informed reference graphs to generate hypotheses about potential mechanism divergence warranting further investigation. CTF operationalizes transparency through four complementary metrics: Causal Influence Index (CII) for theory-model alignment, Causal Complexity Measure (CCM) for structural complexity, Transparency Entropy (TE) for decision certainty, and Counterfactual Stability (CS) for intervention robustness. We evaluate CTF on COMPAS and MIMIC-III datasets across four model families using strict data partitioning to minimize methodological circularity. Our analysis reveals three key findings. First, a complexity tax emerges in sociodemographic prediction: non-linear models increase inferred structural complexity more than seven-fold compared to logistic regression without meaningful discriminative gains (AUC ≈ 0.73). Second, standard explainers (SHAP/LIME) concentrate attribution on proximate mediators; CTF flags divergences between model-implied pathways and theory-specified structures that may indicate masked demographic influence. Third, in mortality prediction, CTF prioritizes actionable physiological markers over immutable demographics. CTF provides a technical scaffold for mechanism-aware, theory-grounded auditing generating accountable hypotheses rather than validating causal claims.
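The abstract does not give formulas for the four metrics. As a minimal sketch, one plausible reading of Transparency Entropy (TE) as a "decision certainty" measure is the Shannon entropy of a model's predicted class distribution; the function name and this interpretation are assumptions, not taken from the paper:

```python
import math

def transparency_entropy(probs):
    """Shannon entropy (in bits) of a predicted class distribution.

    Lower values indicate a more certain (more 'transparent') decision;
    a uniform distribution over two classes yields the maximum of 1 bit.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction yields low entropy ...
print(transparency_entropy([0.95, 0.05]))  # ≈ 0.286 bits
# ... while a maximally uncertain binary prediction yields 1 bit.
print(transparency_entropy([0.5, 0.5]))    # = 1.0 bit
```

Under this reading, averaging TE over an audit set would summarize how decisively a model classifies, complementing the structural metrics (CCM, CS) described above.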

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI