OpenAlex · Updated hourly · Last updated: 04.05.2026, 23:59

This is an overview page with metadata on this scholarly work. The full article is available from the publisher.

Navigating fairness: introducing the multidimensional AIM-FAIR scale for evaluating AI decision-making

2025 · 1 citation · AI & Society · Open Access
Open full text at the publisher

Citations: 1

Authors: 3

Year: 2025

Abstract

People’s concerns regarding the fairness of algorithmic decision-making (ADM), coupled with its expanding use across various spheres of our lives, underscore the need for robust measures to assess perceived fairness in standardized survey research. Existing fairness scales often suffer from inadequate content coverage, particularly in terms of Perceived Group Discrimination, and frequently employ suboptimal measurement methods, such as single-item assessments. This paper introduces the AIM-FAIR scale, a multidimensional tool grounded in classical test theory, employing Likert-scaled response options and a reflective measurement model. Developed through four studies (n = 1777) and validated in both English and German, the scale comprises 17 items across five subscales: Perceived Consistency, Perceived Equity, Perceived Group Bias, Perceived Manipulability, and Perceived (Explanatory) Transparency. Both language versions demonstrate excellent fit indices and consistent measurement invariance across diverse backgrounds, languages, and conditions. The AIM-FAIR scale offers higher ecological validity and a more comprehensive framework for evaluating fairness in ADM, enhancing cross-cultural and cross-linguistic research on AI fairness.

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)