This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Do Large Language Models Evoke Affective Automation-Related Job Insecurity, Turnover and Learning Intentions?
Citations: 1
Authors: 4
Year: 2026
Abstract
Large language models like ChatGPT dominate the public discourse with their potential to take over human labor. ChatGPT already outperforms human employees in specific cognitive tasks in white-collar jobs, potentially substituting work tasks. However, whether employees perceive ChatGPT as a threat to their employment, and how they intend to react to such a potential threat, remains largely unclear. Drawing on conservation of resources theory and recent empirical findings on embodied robots, we investigate whether mere exposure to ChatGPT (Study 1) or use of ChatGPT (Study 2) evokes affective automation-related job insecurity and, subsequently, turnover and learning intentions. Further, we investigate the moderating role of core self-evaluations. The results of two online experiments with German white-collar workers (N1 = 254; N2 = 391) demonstrate that neither mere exposure to ChatGPT nor use of ChatGPT elevates affective automation-related job insecurity, turnover intentions, or learning intentions. Findings from an exploratory analysis in Study 2 indicate that white-collar workers who perceive artificial intelligence to exceed their performance report higher affective automation-related job insecurity after using ChatGPT. Overall, the results provide no empirical evidence of increased affective automation-related job insecurity, turnover intentions, or learning intentions, regardless of participants' level of core self-evaluations. Thus, they do not support the transferability of the positive effects of embodied robots on job insecurity to large language models.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations