This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Illusion of Trust in AI: Behavioural Differences Between Humans and Large Language Models
Citations: 0
Authors: 5
Year: 2026
Abstract
As artificial intelligence (AI) systems increasingly enter trust‐dependent domains, questions arise about whether their behaviour reflects genuine trustworthiness or merely the illusion of it. This study examined how humans and large language models (LLMs) establish and adjust trust in dynamic social interactions using a 50‐round trust game. Across 100 human participants and three leading LLMs—ChatGPT‐3.5, ChatGPT‐4o and DeepSeek‐V3—we compared trust trajectories, responsiveness to partner behaviour and reactions to unexpected outcomes. Human participants adjusted trust in line with partner trustworthiness and exhibited symmetrical responses to unexpected gains and violations. In contrast, LLMs showed fixed, model‐specific behaviour with little to no adaptation based on interaction history. Despite their cooperative appearance, AI agents lacked mechanisms for social learning and trust calibration. These findings highlight a fundamental disconnect between perceived and actual AI behaviour and underscore the need for cautious interpretation of AI trust signals in socially sensitive contexts.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,534 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,423 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,917 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,582 citations