OpenAlex · Updated hourly · Last updated: 01.04.2026, 09:59

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES

2026 · 0 citations · Veredas do Direito: Direito Ambiental e Desenvolvimento Sustentável · Open Access

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The rapid diffusion of generative artificial intelligence (GenAI) tools—especially large language models (LLMs)—is reshaping scholarly publishing worldwide. While these tools can support language editing, translation, and workflow efficiency, they also raise integrity risks, including fabricated citations, unverifiable claims, undisclosed ghostwriting, confidentiality breaches in peer review, and contested ownership of AI-assisted outputs. Vietnam’s journal ecosystem is currently navigating internationalization pressures (e.g., indexing and visibility goals) alongside uneven editorial capacity and fragmented policy infrastructure, making it a critical setting for examining responsible governance of AI-generated content (AIGC). This study reports an exploratory policy-and-practice mapping across five Vietnam-affiliated publishing contexts (university-based open access journals, an internationally co-published journal, a defense-related journal, and law/social-science publishing). Using structured qualitative content analysis, we identify shared norms (e.g., “AI cannot be an author,” accountability remains human) but also substantial variation in disclosure requirements, treatment of AI-generated images and references, restrictions on reviewer use of AI tools, and clarity of enforcement mechanisms. Building on these findings and international literature, we propose a Vietnam-tailored governance framework that combines (i) risk-tiered allowable uses, (ii) mandatory disclosure and provenance documentation, (iii) human-in-the-loop editorial controls, and (iv) capacity-building measures aligned with open science principles. The paper contributes practical templates (disclosure language, policy clauses, and a workflow-integrated checklist) to support journals, editors, and research institutions seeking credible, implementable AI governance.


Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Computational and Text Analysis Methods