This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Mitigating Demographic Bias in ImageNet: A Comprehensive Analysis of Disparities and Fairness in Deep Learning Models
2 citations · 3 authors · 2025
Abstract
Deep learning has transformed artificial intelligence (AI), yet fairness concerns persist because of biases in training datasets. ImageNet, a cornerstone dataset in computer vision, contains demographic imbalances in its “person” categories, raising concerns about biased AI models. This study examines these biases, evaluates their impact on model performance, and implements fairness-aware mitigation strategies. Using a fine-tuned EfficientNet-B0 model, we achieved 98.44% accuracy. Subgroup analysis revealed higher error rates for darker-skinned individuals and women than for lighter-skinned individuals and men. Mitigation techniques, including data augmentation and re-sampling, improved fairness metrics by 1.4% for underrepresented groups. Confidence analysis showed 99.25% accuracy for predictions made with over 80% confidence. To enhance reproducibility, we deployed our demographic bias detection model on Hugging Face Spaces. The study’s limitations include its focus on “person” categories, computational constraints, and potential annotation biases. Future research should extend fairness-aware interventions to more diverse datasets.
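The abstract names re-sampling as one mitigation technique but does not describe how it was implemented. A minimal sketch of one common variant, inverse-frequency sample weighting, is shown below; the function name, the example group labels, and the weighting scheme are illustrative assumptions, not details taken from the paper:

```python
from collections import Counter

def resampling_weights(group_labels):
    """Per-sample weights inversely proportional to group frequency,
    so that each demographic group is drawn with equal probability.
    Weights are normalized to sum to 1."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    # Each group receives total mass 1 / n_groups, split evenly
    # among its members; majority-group samples get smaller weights.
    return [1.0 / (n_groups * counts[g]) for g in group_labels]

# Hypothetical skin-tone annotations for four training images.
labels = ["light", "light", "light", "dark"]
weights = resampling_weights(labels)
# The single "dark" sample carries as much total mass (0.5)
# as the three "light" samples combined.
```

Such weights could then be passed to a weighted sampler (e.g. PyTorch's `WeightedRandomSampler`) so that minibatches are balanced across groups during fine-tuning.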
Related Works
The global landscape of AI ethics guidelines
2019 · 4,725 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,886 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,512 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,202 citations