This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Uncovering algorithmic inequity: a conditional mutual information framework for detecting and mitigating hidden discrimination
Citations: 0
Authors: 3
Year: 2026
Abstract
Machine learning (ML) systems are increasingly embedded in automated decision-making processes across sectors such as healthcare, employment, and education. Traditional fairness metrics are typically used to assess equality across broad demographic categories but cannot capture nuanced and intersectional patterns of discrimination. This paper therefore introduces a novel methodological framework that combines multi-algorithm clustering with conditional mutual information (CMI) to detect hidden subgroup-level discrimination in ML systems. By analysing real-world datasets, we uncover statistically significant discriminatory patterns that disproportionately affect multiply marginalized individuals, particularly at the intersection of protected attributes. Additionally, we evaluate mitigation strategies, such as fairness-aware representation learning, that reduce bias while maintaining predictive accuracy. The findings have two major implications: (1) they highlight the inadequacy of surface-level fairness checks in complex sociotechnical systems, and (2) they offer actionable tools and insights for developers, managers, and policymakers seeking to audit and regulate ML technologies responsibly. Our study contributes to a growing body of work on the societal impact of algorithmic technologies and advances the methodological toolkit for equitable technology governance.
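The abstract's core quantity, conditional mutual information, can be illustrated with a minimal plug-in estimator over discrete samples. This is not the authors' implementation (the full article is behind the publisher's paywall); it is a hedged sketch under the assumption that the framework estimates I(prediction; protected attribute | subgroup), where a value well above zero for some subgroup suggests the model's output still depends on the protected attribute even after conditioning on cluster membership. All names and the toy data below are hypothetical.

```python
from collections import Counter
from math import log2

def conditional_mutual_information(triples):
    """Plug-in estimate of I(X; Y | Z) in bits from (x, y, z) samples.

    Here X might be a model's prediction, Y a protected attribute, and
    Z a cluster/subgroup label -- a hypothetical reading of the paper's
    CMI audit, not its actual code.
    """
    n = len(triples)
    p_xyz = Counter(triples)                      # joint counts of (x, y, z)
    p_xz = Counter((x, z) for x, _, z in triples)  # marginal counts of (x, z)
    p_yz = Counter((y, z) for _, y, z in triples)  # marginal counts of (y, z)
    p_z = Counter(z for _, _, z in triples)        # marginal counts of z
    cmi = 0.0
    for (x, y, z), c in p_xyz.items():
        # I(X;Y|Z) = sum p(x,y,z) * log2( p(z) p(x,y,z) / (p(x,z) p(y,z)) );
        # the sample size n cancels, so raw counts can be used directly.
        cmi += (c / n) * log2((c * p_z[z]) / (p_xz[(x, z)] * p_yz[(y, z)]))
    return cmi

# Toy data: in cluster 0 the prediction copies the protected attribute
# (hidden discrimination); in cluster 1 the two are independent.
biased_cluster = [(a, a, 0) for a in (0, 1)] * 50
fair_cluster = [(x, y, 1) for x in (0, 1) for y in (0, 1)] * 25

print(conditional_mutual_information(biased_cluster + fair_cluster))  # → 0.5
print(conditional_mutual_information(fair_cluster))                   # → 0.0
```

With the biased cluster included, the estimate is 0.5 bits (half the samples carry one fully dependent bit); on the fair cluster alone it is exactly zero. A real audit would additionally need a significance test, since plug-in CMI estimates are biased upward on finite samples.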
Related works
The global landscape of AI ethics guidelines
2019 · 4,711 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,506 citations
Fairness through awareness
2012 · 3,301 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,193 citations