This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Navigating fairness: introducing the multidimensional AIM-FAIR scale for evaluating AI decision-making
Citations: 1
Authors: 3
Year: 2025
Abstract
People’s concerns regarding the fairness of algorithmic decision-making (ADM), coupled with its expanding use across various spheres of our lives, underscore the need for robust measures to assess perceived fairness in standardized survey research. Existing fairness scales often suffer from inadequate content coverage, particularly in terms of Perceived Group Discrimination, and frequently employ suboptimal measurement methods, such as single-item assessments. This paper introduces the AIM-FAIR scale, a multidimensional tool grounded in classical test theory, employing Likert-scaled answer options and a reflective measurement model. Developed through four studies (n = 1777) and validated in both English and German, the scale includes 17 items across five subscales: Perceived Consistency, Perceived Equity, Perceived Group Bias, Perceived Manipulability, and Perceived (Explanatory) Transparency. Both language versions demonstrate excellent fit indices and consistent measurement invariance across diverse backgrounds, languages, and conditions. The AIM-FAIR scale offers higher ecological validity and a more comprehensive framework for evaluating fairness in ADM, enhancing cross-cultural and cross-linguistic research on AI fairness.
Similar works
The global landscape of AI ethics guidelines
2019 · 4,725 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,886 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,512 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,202 citations