OpenAlex · Updated hourly · Last updated: 19.04.2026, 11:42

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Large Language Models and their Applications in Mental Health (Preprint)

2025 · 0 citations · Open Access
Open full text at the publisher

Citations: 0
Authors: 3
Year: 2025

Abstract

Large language models (LLMs) are poised to transform mental healthcare, offering advanced capabilities in diagnosis, prognosis, and decision support. Since their inception, numerous mental health-focused LLMs have emerged in the scientific literature, reflecting growing interest in applying these models across clinical settings. With a broad range of models available, diverse tuning strategies, and multiple use cases, reviewing the current landscape is critical to understanding how LLMs are being applied. We screened 3,121 papers from PubMed, Scopus, and Web of Science, focusing on model type and clinical use case. After removing duplicates and manual filtering, 42 studies were included in our final analysis. Most studies used OpenAI’s GPT series: GPT-4 (25 studies, 59.5%) and GPT-3.5 (16 studies, 38.1%) were the most common. Other frequently used models included BERT-derived models (7 studies, 16.7%), LLaMA (8 studies, 18.6%), and RoBERTa-derived models (6 studies, 14.0%). While all studies initially applied untuned LLMs, several adapted them through few-shot learning or fine-tuning to better align with specific research goals. Most models were used for diagnostic tasks (30 studies, 69.8%). The most common target condition was depression (11 studies, 26.2%), followed by disorders such as ADHD, OCD, and suicidality. A subset of studies also examined general medical cases; these were included when mental health-related content was present. Despite the rapid growth and diversity of LLM applications in mental health, the field remains nascent and exploratory. Future developments must emphasize responsible development, enhanced explainability, and deeper investigation of implementation and deployment practices centered on patient wellbeing.



Topics

Mental Health via Writing · Artificial Intelligence in Healthcare and Education · Digital Mental Health Interventions