This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Shadow AI thrives under punitive social evaluation
0
Citations
5
Authors
2025
Year
Abstract
Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, offer significant performance benefits across professional tasks. Yet their adoption in work-related contexts is complicated by social disapproval and penalties, especially under conditions of mandated transparency. In three studies (one pre-registered; n = 1,678 applicants and n = 477 evaluators), we investigate how people navigate this augmentation-approval tradeoff in an incentivized mini-job application scenario. We find that mandatory disclosure substantially reduces visible AI adoption but prompts a covert behavioral strategy we term shadow adoption: using AI in ways that avoid detection and disclosure. Strikingly, these shadow AI users produce the highest-quality applications, as rated by HR professionals who are unaware that the outputs were AI-assisted. As knowledge of the tradeoff spreads, shadow adoption becomes more prevalent, with nearly twice as many people choosing to use shadow AI. These results reveal a misalignment between well-intended transparency rules and user incentives in work-related contexts. Policies and technologies designed to enforce ethical AI use may inadvertently encourage covert behavior, rewarding concealment over compliance.
Related Work
The global landscape of AI ethics guidelines
2019 · 4,563 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,861 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,407 citations
Fairness through awareness
2012 · 3,273 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations