This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop
Citations: 28
Authors: 12
Year: 2025
Abstract
Background: Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study aimed to fill the gap by exploring patients' perceptions and acceptability of AI conversational agents in mental healthcare. Methods: Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes. Results: The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, yet some also expressed cautious optimism. About half endorsed negative opinions, citing AI's lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision. 
Conclusions: AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI's empathy, capabilities, safety, and human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness.
Similar works
Amazon's Mechanical Turk
2011 · 10,042 citations
The Epidemiology of Major Depressive Disorder
2003 · 7,978 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,734 citations
Acute and Longer-Term Outcomes in Depressed Outpatients Requiring One or Several Treatment Steps: A STAR*D Report
2006 · 5,476 citations
Depression Is a Risk Factor for Noncompliance With Medical Treatment
2000 · 4,146 citations