OpenAlex · Updated hourly · Last updated: 29.03.2026, 16:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Risks of applying artificial intelligence in psychological support: A psychologist and developer’s perspective

2025 · 0 citations · Russian Journal of Education and Psychology · Open Access
Open full text at the publisher

0 citations · 2 authors · 2025

Abstract

Background. Recent years have seen the rapid evolution of artificial intelligence (AI) technologies and their increasing integration into various domains, including psychological support. Despite the growing popularity of AI-based services, their application in psychotherapeutic practice entails numerous ethical, technical, and socio-legal risks. This article examines the key challenges associated with the use of AI in psychological support, including the simulation of empathy, anthropomorphism, AI "hallucinations", data privacy, and issues of accountability.

Purpose. To analyze the risks of using AI tools in psychological support.

Materials and methods. The main research method is systems analysis with a review of scientific literature and regulatory documents. The study is based on: academic publications (2020–2025) in psychology, AI, and digital ethics; empirical data on user interactions with AI services; legal regulations and recommendations concerning AI control; and real-world cases of AI application in psychological support.

Results. The study identifies key risks associated with the use of AI for psychological support. Empathy substitution: AI imitates emotional support without genuine understanding. Anthropomorphism: users attribute human traits to AI, which can lead to psychological dependence on AI. AI "hallucinations": generation of false or harmful recommendations. Threats to confidentiality: data leaks and the absence of legal safeguards. Legal uncertainty: the absence of clear norms on liability for AI-driven actions. The findings highlight the need for clinical validation of AI-based services and the development of ethical standards for their implementation in mental health practice.

Topics

Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · COVID-19, Geopolitics, Technology, Migration