This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Cracking the Case Together: Role Perceptions in Human-AI Mystery Solving Dialogues
Citations: 1
Authors: 7
Year: 2026
Abstract
Large Language Models (LLMs) aim to mimic a natural form of human conversation, likely contributing to an anthropomorphic perception of AI in contrast to conventional human-computer interfaces. Our study explores human-AI conversations and humans’ perception of their counterpart in a collaborative mystery solving task with Anthropic’s Claude 3.5 Sonnet v2 model. We collected self-report data on participants’ perception of the interaction, measured task performance, and analyzed conversational dynamics using LLM-based emotion coding. We found that humans’ perception of AI, ranging from that of a teammate or colleague to a tool, did not necessarily impact performance in mystery solving, but correlated with aspects of the interaction itself. When participants perceived the AI as a teammate or colleague, they felt a stronger sense of team cohesion and their conversations were more collaborative, with more positive emotions. These findings may help practitioners design human-AI interfaces that foster positive interactions without endangering performance.
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,594 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,537 citations