This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Reduced symptom reporting quality during human–chatbot versus human–physician interactions
Citations: 0
Authors: 8
Year: 2026
Abstract
Artificial intelligence (AI) chatbots are increasingly used for self-triage and medical advice seeking. Accurate AI performance, however, hinges on how users interact with such consumer-facing applications. While previous research has identified reservations regarding AI-generated medical advice, earlier stages of human–AI interaction, such as how symptoms are communicated, remain largely unexplored. In a preregistered between-subject experiment (n = 500), participants were randomly assigned to provide simulated symptom reports for common medical conditions to either an AI chatbot or a human physician. We evaluated the quality of the reports for an initial medical urgency assessment using physician-validated, large language model-based suitability metrics. Participants who believed they were interacting with an AI tool (versus a physician) provided lower-quality symptom reports for medical triage. Our findings indicate a bias in how users communicate symptoms in digital settings. This outcome could compromise the performance of consumer-facing AI tools in real-world applications, regardless of the underlying model's actual capacity.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations