OpenAlex · Updated hourly · Last updated: 27.04.2026, 13:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Illusion of Trust in AI: Behavioural Differences Between Humans and Large Language Models

2026 · 0 citations · Human Behavior and Emerging Technologies · Open Access
Open full text at the publisher

Citations: 0
Authors: 5
Year: 2026

Abstract

As artificial intelligence (AI) systems increasingly enter trust‐dependent domains, questions arise about whether their behaviour reflects genuine trustworthiness or merely the illusion of it. This study examined how humans and large language models (LLMs) establish and adjust trust in dynamic social interactions using a 50‐round trust game. Across 100 human participants and three leading LLMs—ChatGPT‐3.5, ChatGPT‐4o and DeepSeek‐V3—we compared trust trajectories, responsiveness to partner behaviour and reactions to unexpected outcomes. Human participants adjusted trust in line with partner trustworthiness and exhibited symmetrical responses to unexpected gains and violations. In contrast, LLMs showed fixed, model‐specific behaviour with little to no adaptation based on interaction history. Despite their cooperative appearance, AI agents lacked mechanisms for social learning and trust calibration. These findings highlight a fundamental disconnect between perceived and actual AI behaviour and underscore the need for cautious interpretation of AI trust signals in socially sensitive contexts.
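To make the study design more concrete: in a repeated trust game, an investor decides each round how much of an endowment to transfer to a partner; the transfer is multiplied, and the partner returns some share. The Python sketch below simulates one such 50-round game with an investor whose trust adapts to the partner's payback, mirroring the human adaptation the abstract reports. The endowment, multiplier, update rule, and all parameter values are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a repeated trust game, assuming the classic
# Berg-style setup (transfers are tripled). Parameter values and the
# trust-update rule are illustrative assumptions, not the paper's protocol.

ENDOWMENT = 10    # points available to the investor each round (assumed)
MULTIPLIER = 3    # transferred amount is tripled (standard trust game)
ROUNDS = 50       # matches the 50-round design described in the abstract

def play_trust_game(trustee_return_fraction=0.4, learning_rate=0.2):
    """Simulate an investor whose trust (fraction of the endowment sent)
    moves toward the payback ratio observed each round."""
    trust = 0.5  # start by transferring half the endowment
    history = []
    for round_number in range(1, ROUNDS + 1):
        sent = trust * ENDOWMENT
        pot = sent * MULTIPLIER
        returned = trustee_return_fraction * pot
        # Symmetric update toward the observed payback ratio: trust rises
        # after unexpected gains and falls after violations. A fixed,
        # non-updating agent would model the LLM behaviour instead.
        payback_ratio = returned / pot if pot > 0 else 0.0
        trust += learning_rate * (payback_ratio - trust)
        trust = min(max(trust, 0.0), 1.0)
        history.append((round_number, round(sent, 2),
                        round(returned, 2), round(trust, 3)))
    return history

for row in play_trust_game()[:5]:
    print(row)

With trustee_return_fraction=0.4, the investor's transfers shrink round by round toward 40% of the endowment; setting the fraction above the initial trust level makes them grow instead. This adaptation toward partner behaviour is the calibration the study tested for.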


Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Explainable Artificial Intelligence (XAI)