OpenAlex · Updated hourly · Last updated: 29 Apr 2026, 21:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A (Mid)journey Through Reality: Assessing Accuracy, Impostor Bias, and Automation Bias in Human Detection of AI‐Generated Images

2025 · 2 citations · Human Behavior and Emerging Technologies · Open Access
Open full text at the publisher

Citations: 2
Authors: 5
Year: 2025

Abstract

While the challenge of distinguishing AI‐generated from real images is widely acknowledged, the specific cognitive biases that systematically shape human judgment in this domain remain poorly understood. It is particularly unclear how a general awareness of AI capabilities fosters novel biases, like a pervasive skepticism (“impostor bias”), and how this interacts with established phenomena like “automation bias”. This study addresses this gap by providing the first quantitative analysis of how these two biases operate across five distinct experimental variants designed to test the context‐dependency of human perception. Through a mixed‐methods study with 746 participants, we demonstrate that human authentication accuracy hovered around chance levels (ranging from 47.0% to 55.5%). However, our analysis provides robust evidence for the systematic operation of cognitive biases. We validate the presence of “impostor bias” through a consistent pattern of higher doubt for AI‐generated images and confirm “automation bias” through significant opinion changes following algorithmic suggestions. Our findings reveal that these biases are not uniform across populations: gender was a consistent predictor of automation bias, with males in all five variants showing a significantly stronger and more consistent tendency (Cohen’s d = 0.254–0.683) to be influenced by algorithmic suggestions. In contrast, age and academic background had minimal and highly localized effects. Furthermore, we identified a significant interaction between experimental stimuli and performance over time, isolating a pronounced fatigue effect to a single questionnaire variant where accuracy progressively declined (by approximately 1.7% per trial). By integrating human feedback with Grad‐CAM visualizations, we confirm a divergence between human holistic evaluation and the localized focus of machine learning models. 
These findings carry direct implications for policy, as discussed within the context of the European AI Act, and inform the design of human–AI systems and media literacy programs aimed at mitigating these critical cognitive vulnerabilities.
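The abstract reports the gender effect on automation bias as Cohen's d (0.254–0.683), the standardized difference between two group means. As an illustrative aside only, a minimal sketch of how Cohen's d for two independent groups is typically computed with a pooled standard deviation (the sample values below are hypothetical, not the study's data):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Unbiased sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical opinion-change scores for two groups (illustrative only):
males = [0.8, 0.6, 0.9, 0.7, 0.5]
females = [0.4, 0.5, 0.3, 0.6, 0.4]
print(round(cohens_d(males, females), 3))
```

By the usual rule of thumb, the reported range 0.254–0.683 spans small-to-medium effects.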

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)