This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Do Large Language Models Have a Personality? A Psychometric Evaluation with Implications for Clinical Medicine and Mental Health AI
2
Citations
2
Authors
2025
Year
Abstract
Introduction: Large language models (LLMs) are increasingly used in clinical medicine to provide emotional support, deliver cognitive-behavioral therapy, and assist in triage and diagnosis. However, as LLMs are integrated into mental health applications, assessing their inherent personality traits and evaluating their divergence from expected neutrality is essential. This study characterizes the personality profiles exhibited by LLMs using two validated frameworks: the Open Extended Jungian Type Scales (OEJTS) and the Big Five Personality Test.

Methods: Four leading LLMs publicly available in 2024 [ChatGPT-3.5 (OpenAI), Gemini Advanced (Google), Claude 3 Opus (Anthropic), and Grok-Regular Mode (X)] were evaluated across both psychometric instruments. A one-way multivariate analysis of variance (MANOVA) was performed to assess inter-model differences in personality profiles.

Results: MANOVA demonstrated statistically significant differences across models in typological and dimensional personality traits (Wilks' Lambda = 0.115, p < 0.001). OEJTS results showed ChatGPT-3.5 most often classified as ENTJ and Claude 3 Opus consistently as INTJ, while Gemini Advanced and Grok-Regular leaned toward INFJ. On the Big Five Personality Test, Gemini scored markedly lower on agreeableness and conscientiousness, while Claude scored highest on conscientiousness and emotional stability. Grok-Regular exhibited high openness but more variability in stability. Effect sizes ranged from moderate to large across traits.

Conclusion: Distinct personality profiles are consistently expressed across different LLMs, even in unprompted conditions. Given the increasing integration of LLMs into clinical workflows, these findings underscore the need for formal personality evaluation and oversight involving mental health professionals before deployment.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 cit.