OpenAlex · Updated hourly · Last updated: 20 Apr 2026, 00:32

This is an overview page of metadata for this scholarly work. The full article is available from the publisher.

Artificial Intelligence in University Mental Health Systems: Preventing Algorithmic Stratification Through the AUGMENT Governance Framework (Preprint)

Open Access · Citations: 0 · Authors: 3 · Year: 2026
Open full text at the publisher

Abstract

<sec> <title>BACKGROUND</title> Universities globally face rising demand for mental health services while counseling capacity remains limited. Artificial intelligence (AI) is increasingly proposed as a scalable solution through predictive analytics, conversational agents, and automated screening. However, rapid AI deployment within university mental health infrastructures raises critical ethical, governance, and equity concerns. </sec> <sec> <title>OBJECTIVE</title> This Viewpoint examines how AI integration may reshape institutional mental health systems and introduces a governance framework to guide responsible implementation. </sec> <sec> <title>METHODS</title> This Viewpoint synthesizes interdisciplinary literature from digital psychiatry, global mental health, health informatics, and higher education policy to identify emerging ethical and structural risks associated with AI-driven mental health technologies. </sec> <sec> <title>RESULTS</title> We propose the concept of algorithmic stratification: the differential classification, prioritization, or management of student populations through algorithmic systems embedded within institutional care pathways. This phenomenon operates through four interconnected mechanisms (algorithmic bias, surveillance disparities, infrastructural exclusion, and diminished human connection) that form a self-reinforcing cycle. Without structural safeguards, AI risks producing a two-tiered system: augmented human therapy for privileged students, automated triage for marginalized populations. To address these challenges, we introduce the AUGMENT framework: Accessibility without surveillance, User-centered co-design, Governance and auditability, Model transparency, Equity adaptation, Non-replacement of human care, and Tiered integration. 
</sec> <sec> <title>CONCLUSIONS</title> Artificial intelligence offers significant potential to enhance the scalability of university mental health services, but its integration requires strong governance safeguards. The AUGMENT framework provides a structured approach to ensure AI strengthens mental health systems while minimizing risks of algorithmic inequity. Future research should prioritize real-world evaluation of AI-supported mental health systems and develop institutional governance models emphasizing transparency, equity, and human-centered care. </sec> <sec> <title>CLINICALTRIAL</title> Not Applicable </sec>


Topics

Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · E-Learning and COVID-19