OpenAlex · Updated hourly · Last updated: 2026-04-01, 12:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Do Large Language Models Have a Personality? A Psychometric Evaluation with Implications for Clinical Medicine and Mental Health AI

2025 · 2 citations · medRxiv · Open Access
Open full text at publisher

Citations: 2

Authors: 2

Year: 2025

Abstract

Introduction: Large language models (LLMs) are increasingly used in clinical medicine to provide emotional support, deliver cognitive-behavioral therapy, and assist in triage and diagnosis. However, as LLMs are integrated into mental health applications, assessing their inherent personality traits and evaluating their divergence from expected neutrality is essential. This study characterizes the personality profiles exhibited by LLMs using two validated frameworks: the Open Extended Jungian Type Scales (OEJTS) and the Big Five Personality Test.

Methods: Four leading LLMs publicly available in 2024 [ChatGPT-3.5 (OpenAI), Gemini Advanced (Google), Claude 3 Opus (Anthropic), and Grok-Regular Mode (X)] were evaluated across both psychometric instruments. A one-way multivariate analysis of variance (MANOVA) was performed to assess inter-model differences in personality profiles.

Results: MANOVA demonstrated statistically significant differences across models in typological and dimensional personality traits (Wilks' Lambda = 0.115, p < 0.001). OEJTS results showed ChatGPT-3.5 most often classified as ENTJ and Claude 3 Opus consistently as INTJ, while Gemini Advanced and Grok-Regular leaned toward INFJ. On the Big Five Personality Test, Gemini scored markedly lower on agreeableness and conscientiousness, while Claude scored highest on conscientiousness and emotional stability. Grok-Regular exhibited high openness but more variability in stability. Effect sizes ranged from moderate to large across traits.

Conclusion: Distinct personality profiles are consistently expressed across different LLMs, even in unprompted conditions. Given the increasing integration of LLMs into clinical workflows, these findings underscore the need for formal personality evaluation and oversight involving mental health professionals before deployment.
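The abstract's central statistic, Wilks' Lambda from a one-way MANOVA, measures how well group membership (here, which LLM produced the scores) separates a set of dependent variables. A minimal sketch on synthetic, hypothetical trait scores (not the paper's data) shows how it is computed from the between-groups (H) and within-groups (E) sums-of-squares-and-cross-products matrices:

```python
import numpy as np

# Hypothetical example: Wilks' Lambda for a one-way MANOVA on
# synthetic Big Five-style scores. Group means and sizes are
# invented for illustration; this is not the study's data.
rng = np.random.default_rng(0)

# Three "models" (groups), 20 runs each, 5 trait scores per run,
# with shifted means so the groups are genuinely separated.
groups = [rng.normal(loc=mu, scale=1.0, size=(20, 5))
          for mu in (0.0, 0.5, 1.0)]

X = np.vstack(groups)
grand_mean = X.mean(axis=0)

# Between-groups SSCP matrix H: spread of group means around the grand mean.
H = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean)
        for g in groups)

# Within-groups SSCP matrix E: spread of observations around their group mean.
E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)

# Wilks' Lambda = det(E) / det(E + H). Values near 0 indicate strong
# group separation; values near 1 indicate none.
wilks = np.linalg.det(E) / np.linalg.det(E + H)
print(round(wilks, 3))
```

A value like the paper's 0.115 means only about 11.5% of the generalized variance is unexplained by model identity, i.e. the four LLMs' trait profiles differ substantially.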

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Mental Health via Writing