This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI aversion or appreciation? A capability–personalization framework and a meta-analytic review.
Citations: 39
Authors: 8
Year: 2025
Abstract
Artificial intelligence (AI) is transforming human life. While some studies find that people prefer humans over AI (AI aversion), others find the opposite (AI appreciation). To reconcile these conflicting findings, we introduce the Capability-Personalization Framework. This theoretical framework posits that when deciding between AI and humans in a context, individuals focus on two dimensions: (a) perceived capability of AI and (b) perceived necessity for personalization. We propose that AI appreciation occurs when (a) AI is perceived as more capable than humans and (b) personalization is perceived as unnecessary in a given decision context, whereas AI aversion occurs when these conditions are not met. Our Capability-Personalization Framework is substantiated by a meta-analysis of 442 effect sizes from 163 studies (N = 82,078): AI appreciation occurs (d = 0.27, 95% CI [0.17, 0.37]) when AI is perceived as more capable than humans and personalization is perceived as unnecessary in a given decision context; otherwise, AI aversion occurs (d = -0.50, 95% CI [-0.63, -0.37]). Moderation analyses suggest that AI appreciation is more pronounced for tangible robots (vs. intangible algorithms), for attitudinal (vs. behavioral) outcomes, in between-subjects (vs. within-subjects) study designs, and in countries with low unemployment, while AI aversion is more pronounced in countries with high levels of education and internet use. Overall, our integrative framework and meta-analysis advance knowledge about AI-human preferences and offer valuable implications for AI developers and users. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
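The abstract reports standardized mean differences (Cohen's d) with 95% confidence intervals. As a minimal illustrative sketch (not the authors' actual meta-analytic procedure, which pools 442 effect sizes across studies), the following shows how a single study's d and an approximate large-sample CI are typically computed from group means, standard deviations, and sample sizes; all inputs here are hypothetical.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def ci_for_d(d, n1, n2, z=1.96):
    """Approximate 95% CI using the large-sample variance of d."""
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return (d - z * se, d + z * se)

# Hypothetical study: preference ratings for AI vs. human advisors.
d = cohens_d(mean1=4.2, mean2=3.9, sd1=1.1, sd2=1.0, n1=120, n2=120)
lower, upper = ci_for_d(d, 120, 120)
print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

A positive d with a CI excluding zero would indicate AI appreciation; a negative d with a CI excluding zero, AI aversion, mirroring how the pooled estimates in the abstract are read.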
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 cit.