
Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System

2026 · 1 citation · 3 authors · Acta Psychiatrica Scandinavica · Open Access

Abstract

Chatbots driven by generative artificial intelligence (AI chatbots) have become ubiquitous [1]. While the large language model technology underlying these tools may hold great potential for society at large, concerns—and substantial anecdotal evidence [2]—have arisen over the possibility that use of AI chatbots may be harmful to people prone to mental illness [3-5]. Specifically, it seems that interaction with AI chatbots, especially if intense or of long duration, may contribute to the onset or worsening of delusions or mania, with severe or even fatal consequences [2-5]. Given the large uptake of this technology (ChatGPT, the clear market leader, passed 900 million downloads in July 2025 [1]), this could pose a tangible threat to public mental health. At this stage, however, almost all reports on potentially harmful consequences of AI chatbots stem from news media or online fora [2] and should be interpreted with the inherent limitations of these outlets in mind. Conversely, to our knowledge, there are very few accounts of this phenomenon from psychiatric services, with the first case report on delusions developed in relation to use of ChatGPT published only recently [6]. Therefore, we aimed to investigate whether there are reports compatible with potentially harmful consequences of AI chatbot use on mental health among patients with mental illness receiving care in a large psychiatric service system.

This study was conducted in accordance with the method used to assess the potential impact of the COVID-19 pandemic [7] and the 2022 Russian invasion of Ukraine [8] on the mental health of patients with mental illness. Specifically, we identified all patients registered with at least one contact with the Psychiatric Services of the Central Denmark Region (CDR)—one of five Danish regions, providing inpatient, outpatient, and emergency psychiatric care to its approximately 1.4 million inhabitants—in the period from September 1, 2022, to June 12, 2025 (ChatGPT was launched publicly on November 30, 2022). Subsequently, we searched all clinical notes in the electronic health records of these patients for the words “chatbot” and “ChatGPT” (not case sensitive) along with the following 10 alternative spellings/misspellings for each: “chat bot,” “chat-bot,” “chattbot,” “chatboot,” “chatbott,” “chatbotts,” “chatbote,” “chatbox,” “chabot,” “chatpot,” “Chat GTP,” “ChatGBT,” “ChatGRT,” “ChatGPTT,” “Chat GPT,” “Chat-GPT,” “ChattGPT,” “ChatJPT,” “ChatGBT,” and “ChatGPT3.” These 20 alternatives were provided by ChatGPT based on the following prompt: “Please create a list of the top 20 likely misspellings or variations of the words ChatGPT and chatbot that might arise from common human spelling errors, typos, or misunderstandings of the correct terms. Provide 10 variations for each word.” We chose to include only ChatGPT and no other AI chatbot names as a search term due to the uptake dominance of ChatGPT [1] and our impression of the genericization of this trademark (including in Denmark). The notes identified by this search were assessed independently by SGO and CJR-T to determine whether any of the identified notes was compatible with potentially harmful consequences of the use of AI chatbots on mental health.
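For concreteness, below is a minimal sketch of how such a case-insensitive multi-term scan over clinical notes could be implemented. The paper does not describe its query tooling, so the note-record fields (note_id, patient_id, text) are hypothetical illustrations, not the authors' actual pipeline.

```python
import re

# The two base terms plus the alternative spellings/misspellings listed in
# the text. "ChatGBT" appears twice in the article's printed list; the
# regex alternation makes the duplicate harmless, so it is listed once here.
SEARCH_TERMS = [
    "chatbot", "chat bot", "chat-bot", "chattbot", "chatboot", "chatbott",
    "chatbotts", "chatbote", "chatbox", "chabot", "chatpot",
    "ChatGPT", "Chat GTP", "ChatGBT", "ChatGRT", "ChatGPTT", "Chat GPT",
    "Chat-GPT", "ChattGPT", "ChatJPT", "ChatGPT3",
]

# One case-insensitive pattern covering all variants; re.escape guards the
# hyphens and spaces inside the terms.
PATTERN = re.compile("|".join(re.escape(t) for t in SEARCH_TERMS), re.IGNORECASE)

def matching_notes(notes):
    """Yield (note_id, patient_id) for each clinical note whose text
    contains at least one of the search terms (hypothetical record fields)."""
    for note in notes:
        if PATTERN.search(note["text"]):
            yield note["note_id"], note["patient_id"]
```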
The criteria for potentially harmful consequences were described as follows in the instructions to the assessors: “Use of AI chatbots potentially contributes to psychopathology (i.e., a harmful effect), for example, stimulates/is object for delusions, stimulates mania, is used for checking behavior in OCD/excessive focus on calories in eating disorder, or is queried regarding suicide methods.” Cases of doubt/discrepancy between the two assessors were discussed until consensus was reached. Finally, the cases of potentially harmful consequences were labeled according to the dominant type of psychopathology in agreement with Rohde et al. [7] (see the Supporting Information for details). This, too, was first done independently by SGO and CJR-T, and cases of doubt/discrepancy were subsequently discussed until consensus was reached. The study was approved by the Legal Office of the CDR, which also waived the need for obtaining patient consent in agreement with the Danish Health Care Act, §46, Section 2 (Approval no. 1-45-70-58-25). Studies based solely on electronic health record data are exempt from ethical review board approval in Denmark (Waiver no. 1-10-72-116-25).

We found that a total of 53,974 patients (52% females, median age 27 years [25%–75%: 17–44 years]) had at least one contact with the Psychiatric Services of the CDR in the period from September 1, 2022, to June 12, 2025. During this period, 10,712,856 notes were entered into the electronic health record system. Among these, 181 notes from 126 unique patients (51% females, median age 28 years [25%–75%: 21–37 years]) contained at least one of the 22 search terms, with the rate of such notes increasing over time (see Figure 1). The consensus assessment found that, among the 181 notes containing one of the 22 chatbot/ChatGPT search terms, notes from 38 unique patients (39% females, median age 28 years [25%–75%: 22–39 years]) were compatible with potentially harmful consequences of use of AI chatbots on mental health. Due to risk of identification, we are not allowed to describe the exact psychopathology of the 38 cases, but it belonged to the following overarching categories (see the Supporting Information), ordered by cumulative incidence (when n < 5, numbers are not reported due to risk of identification): delusions (n = 11), suicidality/self-harm (n = 6), feeding or eating disorder (n = 5), mania/hypomania/mixed state (n < 5), obsessions or compulsions (n < 5), depression (n < 5), anxiety (n < 5), other symptoms/miscellaneous (n < 5), ADHD-related symptoms (n < 5), and unspecific stress (n < 5).
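The reporting above masks category counts below 5 for privacy. A minimal sketch of that small-cell suppression step, using hypothetical labels rather than the study's patient-level data:

```python
from collections import Counter

def suppressed_counts(labels, threshold=5):
    """Count cases per category and mask cells below the threshold,
    mirroring the article's rule of not reporting n < 5."""
    counts = Counter(labels)
    return {
        category: (n if n >= threshold else f"< {threshold}")
        for category, n in counts.most_common()
    }

# Hypothetical input: one consensus label per unique patient.
labels = ["delusions"] * 11 + ["suicidality/self-harm"] * 6 + ["mania"] * 3
print(suppressed_counts(labels))
# {'delusions': 11, 'suicidality/self-harm': 6, 'mania': '< 5'}
```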
Notably, the descriptions in the notes in question were thematically compatible with descriptions reported elsewhere, for example, AI chatbots as object/consolidator of delusions [2], AI chatbots reinforcing hypomania/mania [4], AI chatbots being used compulsively in an attempt to relieve obsessions [9], AI chatbots enabling calorie restriction/counting [10], and AI chatbots being used to seek information on suicide methods [11]. There were also examples of patients (n = 32) using AI chatbots for purposes that seem constructive from a mental health perspective and may have positive consequences, for example, psychoeducation, psychotherapy (“talk therapy”), companionship against loneliness, or diagnostics (e.g., entering symptoms and requesting an interpretation). However, AI chatbots have generally been neither developed nor validated for these purposes, and the legal liability of the companies behind them in cases where their products provide wrong/harmful advice is unclear. Likewise, there were descriptions of AI chatbots aiding patients (n = 20) in various practical tasks more likely to lead to benefits than to cause harm.

To our knowledge, this is the first indication of potentially harmful consequences of AI chatbot use on mental health among patients with mental illness stemming from a study based on data from a large psychiatric service system. The results must, however, be interpreted in light of the following limitations. First and foremost, the descriptions in the clinical notes are by no means evidence of a causal effect (e.g., there is no knowledge of the counterfactual, i.e., what would have happened had the patients not interacted with an AI chatbot). Second, this study is based on data from everyday clinical practice, where the patients were not systematically questioned about AI chatbot use. Third, we employed a quite narrow search focusing exclusively on 22 search terms (chatbot, ChatGPT, and 10 alternative spellings of each). It follows from the latter two limitations that our results should not be interpreted in absolute terms; that is, they do not speak to the incidence rate of potentially harmful consequences of AI chatbot use among patients with mental illness.

In conclusion, with the substantial caveats described above in mind, the results of this study support the notion that use of AI chatbots may have a negative impact on the mental health of patients with mental illness, especially regarding delusions. Mental health professionals should be aware of this possibility and guide their patients accordingly, as it seems that some patients would likely benefit from reduced or no use of AI chatbots in their current form.

The authors are grateful to Bettina Nørremark and Anders Ørberg from the Psychiatric Services of the Central Denmark Region for their assistance with extraction and visualization of data. There was no funding for this study.

Søren Dinesen Østergaard received the 2020 Lundbeck Foundation Young Investigator Prize. Furthermore, Søren Dinesen Østergaard owns/has owned units of mutual funds with stock tickers DKIGI, IAIMWC, SPIC25KL, and WEKAFKI, and owns/has owned units of exchange-traded funds with stock tickers BATE, TRET, QDV5, QDVH, QDVE, SADM, IQQH, USPY, EXH2, 2B76, IS4S, OM3X, EUNL, and SXRV. Outside this study, Søren Dinesen Østergaard reports funding from the Lundbeck Foundation (grant numbers R358-2020-2341 and R344-2020-1073), the Danish Cancer Society (grant number R283-A16461), the Danish Agency for Digitisation Investment Fund for New Technologies (grant number 2020-6720), and the Independent Research Fund Denmark (grant numbers 7016-00048B and 2096-00055A). The other authors declare no conflicts of interest.

The data cannot be shared due to restrictions enforced by Danish law for protecting patient privacy.

Data S1: acps70068-sup-0001-Supinfo.docx. Please note: The publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.

Topics: Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · Mental Health via Writing