This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES
Citations: 0
Authors: 1
Year: 2026
Abstract
The rapid diffusion of generative artificial intelligence (GenAI) tools—especially large language models (LLMs)—is reshaping scholarly publishing worldwide. While these tools can support language editing, translation, and workflow efficiency, they also raise integrity risks, including fabricated citations, unverifiable claims, undisclosed ghostwriting, confidentiality breaches in peer review, and contested ownership of AI-assisted outputs. Vietnam’s journal ecosystem is currently navigating internationalization pressures (e.g., indexing and visibility goals) alongside uneven editorial capacity and fragmented policy infrastructure, making it a critical setting for examining responsible governance of AI-generated content (AIGC). This study reports an exploratory policy-and-practice mapping across five Vietnam-affiliated publishing contexts (university-based open access journals, an internationally co-published journal, a defense-related journal, and law/social-science publishing). Using structured qualitative content analysis, we identify shared norms (e.g., “AI cannot be an author,” accountability remains human) but also substantial variation in disclosure requirements, treatment of AI-generated images and references, restrictions on reviewer use of AI tools, and clarity of enforcement mechanisms. Building on these findings and international literature, we propose a Vietnam-tailored governance framework that combines (i) risk-tiered allowable uses, (ii) mandatory disclosure and provenance documentation, (iii) human-in-the-loop editorial controls, and (iv) capacity-building measures aligned with open science principles. The paper contributes practical templates (disclosure language, policy clauses, and a workflow-integrated checklist) to support journals, editors, and research institutions seeking credible, implementable AI governance.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations