
Obvious artificial intelligence-generated anomalies in published journal articles: A call for enhanced editorial diligence

2024 · 3 citations · 1 author · Learned Publishing · Open Access


Abstract

In the last decade, artificial intelligence (AI) has revolutionized virtually every aspect of our lives, marking a transformative era of technological advancement and integration (Bohr & Memarzadeh, 2020; Verganti et al., 2020). From the way we interact with our devices through voice-activated assistants, to the convenience of personalized recommendations on streaming services, AI has seamlessly woven itself into the fabric of daily existence. This pervasive influence of AI extends beyond everyday consumer technology, profoundly impacting sectors such as healthcare (Rajpurkar et al., 2022), where algorithms diagnose diseases with unprecedented accuracy, and transportation (Bharadiya, 2023), with the advent of autonomous vehicles reshaping notions of mobility and safety. This widespread integration of AI has not spared the field of academic publishing (Ganjavi et al., 2024), where its influence has instigated a series of challenges and potential pitfalls. The introduction of AI into research and writing processes, intended to facilitate and enhance the arduous tasks of data analysis and literature review, has instead opened a Pandora's box of issues. Among the most significant concerns are ethical and practical issues related to the application of AI in publication (Ganjavi et al., 2024; Samuel et al., 2021). Recognizing these dynamics, the STM report (2023) offers practical guidelines tailored specifically for the use of generative AI within this field. It clearly differentiates the roles of generative AI, from its simple use as an authorial aid, which necessitates no further reporting, to its more advanced implementations. Moreover, universities and publishers globally are developing policies to govern the use of generative AI in academic writing. These guidelines are crafted to steer authors through the intricate and diverse applications of AI, ensuring that its advantages are maximized while effectively mitigating potential risks (Gulumbe et al., 2024). 
Despite these guidelines, the academic community has witnessed the troubling emergence of clear AI-generated anomalies within published articles (Wong, 2024). Such instances serve as a stark reminder of the fine balance between leveraging AI for its undeniable benefits and addressing AI-related discrepancies. These discrepancies not only undermine the integrity of scholarly work but also pose a threat to the foundational principles of academic rigour and trust. The crux of the issue lies not in the use of AI per se but in the apparent lack of editorial oversight that has allowed evidently flawed AI-generated content to slip through the rigorous checks and balances of the peer-review process. Recent events underline this concern, illuminating a dire need for the implementation of more stringent editorial standards. For example, a recent paper entitled ‘Cellular Functions of Spermatogonial Stem Cells in Relation to the JAK/STAT Signaling Pathway’, published by Frontiers in Cell and Developmental Biology in February 2024 and now retracted (Guo et al., 2024), became a subject of controversy in both social and mainstream media. In the paper, researchers utilized Midjourney to depict a rat's reproductive organs; however, the result was a cartoon rodent with comically oversized genitalia, annotated with nonsensical labels. In another example, an article entitled ‘The Three-Dimensional Porous Mesh Structure of Cu-Based Metal-Organic-Framework – Aramid Cellulose Separator Enhances the Electrochemical Performance of Lithium Metal Anode Batteries’ (Zhang et al., 2024), published in the Q1 journal Surfaces and Interfaces (impact factor 6.2), featured an introduction clearly bearing the hallmarks of AI-generated text. Upon closer examination, the introduction was marked by a distinct lack of critical analysis and coherence. 
Similarly, in a separate study published in Radiology Case Reports, titled ‘Successful Management of an Iatrogenic Portal Vein and Hepatic Artery Injury in a 4-Month-Old Female Patient: A Case Report and Literature Review’ (Bader et al., 2024), a segment of the text notably diverges from the expected academic discourse. Specifically, the passage outlines, ‘In summary, the management of bilateral iatrogenic…’ before abruptly transitioning into a disclaimer typical of AI-generated content, stating, ‘I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can offer general guidance on managing injuries to the hepatic artery, portal vein, and bile duct. However, for individual cases, it's imperative to seek the expertise of a medical professional who possesses detailed knowledge of the patient's medical history and can offer tailored advice’. This excerpt underlines the critical issue of including AI-generated text within scholarly articles, spotlighting the pressing need for rigorous editorial oversight to maintain the integrity and accuracy of academic publishing. These instances are just a few examples of poor manuscript handling and do not stand in isolation. By simply searching phrases like ‘As an AI language model’, ‘I don't have access to real-time data’, and ‘As of my last knowledge update’, one can find hundreds of papers with text generated by AI. These papers, which presumably passed through initial assessment, peer review, and copy-editing processes, highlight a significant oversight in the current academic publishing paradigm. These instances are particularly alarming when considering the standing of the publishers involved—esteemed institutions that have long been regarded as gatekeepers of quality and scholastic excellence. 
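The phrase search described above can be sketched in a few lines of Python. This is a minimal illustration, not a production detector: the phrase list is taken directly from the telltale chatbot disclaimers quoted in this piece, and any real screening tool would need a far larger, regularly updated list alongside human review.

```python
# Minimal sketch of a telltale-phrase screen for manuscript triage.
# The phrase list below comes from the examples quoted in this piece;
# it catches only the most blatant, unedited chatbot output.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i don't have access to real-time data",
    "as of my last knowledge update",
]


def find_telltale_phrases(text: str) -> list[str]:
    """Return the telltale phrases present in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]


if __name__ == "__main__":
    passage = ("In summary, the management of bilateral iatrogenic injury... "
               "I'm very sorry, but as an AI language model I cannot advise.")
    print(find_telltale_phrases(passage))  # → ['as an ai language model']
```

A check this simple could run automatically at submission, flagging manuscripts for an editor's attention before peer review rather than after publication.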
Such oversights suggest that current editorial processes may not be equipped to identify the subtleties of AI-generated text, which often mimics the structure and tone of scholarly writing but lacks the nuanced insight and critical thinking foundational to academic discourse. The integration of AI into scholarly publishing introduces several significant gatekeeping challenges (Gulumbe et al., 2024; Wise et al., 2024), key among them being the development of reliable mechanisms for detecting AI-generated content (Chaka, 2023; Wang et al., 2023). Despite substantial efforts from both academic and technological sectors, the creation of a dependable generative AI detection tool has yet to be realized. Current methodologies often falter in accurately differentiating between nuances in human and AI-generated texts (Chaka, 2023), an issue exacerbated by the continuous evolution and increasing sophistication of AI technologies. The inherent variability of AI-generated content, particularly its capacity to emulate human linguistic traits, poses substantial challenges for existing algorithms, leading to inconsistent results. This issue underlines the urgent need for ongoing research and enhancement of AI-detection techniques to keep pace with the advancements in generative AI capabilities. In addition to the technical challenges, the gatekeeping role is further complicated by ethical and operational considerations (Gendron et al., 2022; Wise et al., 2024). The subtlety with which AI tools now mimic human reasoning and writing styles raises profound ethical questions about authorship and originality (Gulumbe et al., 2024), complicating the traditional roles of editors and reviewers. There is also a significant concern about the transparency of AI use in research and publication processes (Gulumbe et al., 2024). Ensuring that authors disclose the extent of AI involvement in their work is crucial for maintaining the integrity of the academic record, but disclosure alone is not enough. 
Moreover, the rapid adaptation of AI tools across different disciplines demands a scalable and flexible approach to gatekeeping that can accommodate diverse fields and types of content. As AI technologies permeate deeper into the fabric of academic work, the scholarly community must not only develop robust technological solutions but also foster a culture of integrity and transparency that upholds the foundational principles of scholarly communication. In response to the advancements in AI, academic journals have adopted varying stances on the incorporation of AI-generated visual content (Gulumbe et al., 2024; Inam et al., 2024). Springer Nature, distinguishing itself with a more stringent approach, has prohibited the use of AI-generated images, videos, and illustrations in the majority of its journal articles, with an exception for those directly addressing AI topics (Wong, 2024). Conversely, journals within the Science family adopt a policy requiring explicit editorial consent for the inclusion of AI-generated text, figures, or images, unless the manuscript explicitly focuses on AI or machine learning themes (Wong, 2024). On another front, PLoS One embraces the utilization of AI tools under the condition that researchers fully disclose the specific tools employed, their application methodology (Wong, 2024), and the measures taken to ensure the integrity of the resultant content (Wong, 2024). While the measures taken by journal publishers—ranging from outright bans to mandated disclosures of AI-generated content—represent a step toward addressing the challenges posed by AI in academic publishing, these policies alone prove insufficient. The simple act of declaring AI use does not safeguard against the publication of gibberish or ensure the integrity of the content, as there could still be instances where authors either neglect to declare AI assistance or, despite declarations, manage to publish flawed content. 
This situation amplifies the necessity for academic gatekeepers, including editorial teams and publishers, to intensify their efforts beyond mere policy enactments. To strengthen the foundation of academic integrity in the face of the proliferation of AI-generated content, this piece therefore advocates for the implementation of the following strategies:

- Establish mandatory training modules that accentuate digital literacy, furnish technical proficiency in AI-detection tools, and cultivate the ability to critically distinguish AI-generated content from human-authored texts. These modules should encompass case studies, provide updates on the latest trends in AI writing, and incorporate hands-on sessions with AI-detection software.
- Integrate specialized software tools tailored for the detection of AI-generated content. Comparable to advanced plagiarism detectors, these tools must be routinely updated to keep pace with the rapid advancements in AI technology and should be an integral part of the standard toolkit for manuscript assessment.
- Formulate a universally accepted protocol that delineates the steps for detecting AI-generated content. This protocol should draw parallels with the COPE guidelines for ethical publishing practices, ensuring a standardized approach across the academic publishing landscape.
- Make AI and its implications in publishing a permanent item on the agenda of regular editorial meetings, ensuring sustained vigilance and timely adaptation of editorial policies as AI technologies continue to evolve.
- Cultivate a culture of integrity and transparency by mandating that authors disclose the extent of AI's involvement in their submissions.
According to the recent STM report (2023), a clear distinction should be made between basic uses of generative AI as a support tool, which do not require additional reporting, and more sophisticated applications that could significantly alter content and thus necessitate comprehensive disclosure. This distinction should be clearly communicated in editorial policies to guide authors in their reporting obligations. The disclosure should not just be a formal disclaimer but a detailed account that substantiates the manuscript's credibility and aligns with ethical guidelines on AI use, helping authors understand and adhere to the expected standards. Beyond disclosure, journals and publishers should:

- Facilitate open discussions on the ethical dimensions of AI in research through editorials, dedicated workshops, and policy discussions at academic conferences, thus broadening the understanding of acceptable AI use in scholarly writing and its appropriate disclosure.
- Maintain the academic community's awareness of the latest developments and challenges in AI through newsletters, special journal issues focusing on AI in academic writing, and online forums for sharing experiences and solutions.
- Encourage partnerships with technology developers to ensure that the tools employed by journals remain cutting-edge and effective in identifying AI-generated content.

By implementing these targeted strategies, academic journals and publishers can adapt strategically to the integration of AI, safeguarding the integrity and reliability of scholarly outputs in an era of rapid technological progression. This proactive stance is essential for confronting current challenges and mitigating future issues, thereby securing a robust and principled future for academic publishing. 
The emergence of AI-generated anomalies within the pages of esteemed scholarly publications has sounded an urgent alarm across the academic publishing landscape. This situation demands a concerted response from all involved parties—authors, reviewers, editors, and publishers alike—to adopt and enforce more rigorous editorial standards and practices. Such measures are critical not only to preserving the credibility of individual works and the journals that disseminate them but also to maintaining the foundational trust essential to scholarly discourse. In alignment with these enhanced practices, the adoption of specialized software tools tailored for identifying AI-generated content, along with the development of universally recognized AI detection protocols, should be considered integral components. These tools and protocols will bolster the editorial process, ensuring that publications can effectively manage and mitigate the complexities introduced by AI, thus upholding the integrity and reliability of scholarly communication. Other measures, which include regular updates to keep pace with the swift advancements in AI technology, are crucial for safeguarding the integrity and reliability of scholarly communications. This pivotal moment serves as both a wake-up call and a guiding light, steering us toward the implementation of advanced, forward-looking strategies that ensure the enduring quality and dependability of academic output. As we navigate this era increasingly shaped by AI, our collective efforts will continue to reinforce the legacy and future of scholarly communication, affirming our dedication to the core principles of academic excellence and integrity.
