OpenAlex · Updated hourly · Last updated: 20 Apr 2026, 09:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Intent, Truth and Governance: Why Current AI Safety Frameworks Fail the Humans They Claim to Protect

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

This paper examines a fundamental gap in current AI governance frameworks: the definition of safety as the prevention of harmful output, without consideration of harm caused by withholding truth. Drawing on a case study in AI wellness development, the architecture of the VERA Behavioral Reasoning System, and foundational research in neuroscience and trauma-informed care, the paper proposes a distinction between safe AI and honest AI. Current frameworks including the EU AI Act, the NIST AI Risk Management Framework, and Constitutional AI approaches consistently prioritize institutional safety over human truth. This paper argues that an AI system which withholds truth to manage user comfort or institutional risk is not safe. It is complicit. The paper presents six governance principles for truth-centered AI design, grounded in polyvagal theory, somatic marker research, and trauma-informed care. It concludes with an open invitation to researchers, regulators, and practitioners to examine the gap between current AI policy and genuine human need.


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Neuroethics, Human Enhancement, Biomedical Innovations