This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Unmasking Racial Disparities: Epistemic Injustice and Bias in Facial Recognition Systems
0 citations · 1 author · 2025
Abstract
The widespread adoption of facial recognition technologies (FRT) worldwide has sparked major concerns over algorithmic bias and racial inequality. Various reports have highlighted that these systems demonstrate considerably higher error rates when identifying people from marginalised racial communities, particularly individuals with darker complexions. This study critically examines the systemic racial bias embedded in FRT through a decolonial lens, exploring how algorithmic structures perpetuate historical inequalities and reinforce racial hierarchies. Using a qualitative literature review approach, this study integrates insights from 80 academic works published between 2015 and 2025. Data were collected through a systematic review of peer-reviewed articles, institutional reports, and critical essays accessed via Scopus, JSTOR, and Google Scholar. The analysis applied thematic coding to identify recurring patterns and systemic trends in the literature. Findings show that facial recognition systems consistently underperform on non-White faces; error rates have been reported as high as 34% for African American individuals, while remaining below 1% for White individuals. The issue stems not only from imbalanced training datasets but also from the limited diversity within AI development teams and the prevailing epistemologies that shape technological design. Moreover, the deployment of FRT in law enforcement has disproportionately targeted minority communities, resulting in wrongful arrests and heightened surveillance. The study concludes that resolving this issue requires more than technical fixes; it calls for a fundamental redesign of underlying principles, data practices, and governance through a decolonial framework. Future research should explore intersectional approaches that integrate indigenous, racialised, and feminist insights in the construction of ethical AI frameworks.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,723 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,886 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,511 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,201 citations