This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Scholarly Integrity and Generative AI: Five Boundary Violations for IS Scholarship
Citations: 0
Authors: 2
Year: 2026
Abstract
Generative AI (GAI) is no longer a speculative add-on to academic work. By 2026, major institutions have moved beyond tentative warnings toward durable rules and infrastructures: the International Committee of Medical Journal Editors (ICMJE) has added dedicated guidance on GAI use by authors, reviewers, and editors, and Nature has published work demonstrating the end-to-end automation of parts of the research process. For the Information Systems Journal (ISJ), the question is not whether GAI can simply be ignored: it cannot. ISJ's current author and reviewer guidelines already stipulate that authors must not use GAI for any intellectual task, including problem formulation, literature review, data analysis, or writing more generally (Davison 2025). Implicitly, the actions that authors take as they plan for the revision process, and the writing up of revision notes, are also considered to constitute intellectual tasks and so are similarly restricted. We now specify that authors must explain in a cover letter how, if at all, they have used GAI tools in any respect. The same transparency requirement applies to reviewers and editors, who must not outsource either their critical analysis of a paper under review or their composition of the review itself to a GAI tool. More broadly, most (if not all) academic journals' ethics guidance emphasizes disclosure, human accountability, authorship limits, and confidentiality in peer review (ICMJE 2026; Lu et al. 2026).

Nevertheless, even as GAI tools become more popular, and notwithstanding the more or less restrictive policies promulgated by publishers and editors, a broader scholarly consensus as to what constitutes appropriate use remains elusive. Researchers remain divided on what counts as acceptable GAI use in writing, analysis, and review. For instance, as noted above, the ISJ takes a strict line: the use of GAI tools for any intellectual activity is not permitted.
However, at some other journals a more relaxed regime appears to be in place, with some editors even encouraging GAI use. Meanwhile, recent controversies, such as the rejection of hundreds of conference papers due to prohibited GAI use, suggest that the central question is no longer whether GAI can accelerate the research production process, but under what conditions that acceleration remains legitimate and accountable (Gibney 2026; Kwon 2025; Naddaf 2025).

This editorial builds on ISJ's own ongoing reflection on what responsible IS scholarship requires. These reflections have appeared in editorials over the last decade. For instance, Davison and Tarafdar (2022) articulate cultural values for ISJ that include transparency, integrity, fairness, accountability, and developmental care. Davison (2021) stresses the importance of contextual knowledge; Chatterjee and Davison (2021) warn against formulaic gap-spotting; Díaz Andrade et al. (2023) insist on meaningful theoretical contribution; and Davison et al. (2026) reiterate the importance of IS research in and for practice. Within the GAI conversation specifically, Davison et al. (2023) opened the debate about GAI as research assistant or co-author; Davison et al. (2024) examined the ethics of using GAI for qualitative data analysis; and Riemer et al. (2026) argued that GAI should be treated neither as ‘just another IT artefact’ nor as a colleague. The present editorial extends the conversation by asking when the use of GAI within the research workflow transgresses editorial boundaries at the ISJ, and what actions should then result.

Recently, the ISJ and most other academic journals of repute have seen a rapid increase in boundary violations, that is, violations of the core values of academia, triggered by GAI use. Editors routinely encounter submissions in which reference lists contain hallucinated sources and real articles are cited as support for claims they do not in fact make.
More troublingly, such problems are not always confined to weak or easily rejected manuscripts: fabricated, distorted, or misapplied references can survive peer review and find their way into accepted articles because GAI-generated prose often mimics the surface features of competent scholarship while concealing failures of verification. Beyond hallucinated references, further problems are emerging throughout the research process. These include: synthetic literature reviews that manufacture a false sense of consensus; invented summaries or quotations from prior work; decontextualized coding and interpretation of qualitative material; unverified code and statistical outputs; and opaque reliance on GAI for framing research questions, identifying theoretical lenses, and/or generating contributions. Additional concerns arise when authors upload confidential data, reviewer reports, interview transcripts, or organizational material into external (e.g., GAI) systems without authorization. The cumulative effect is not simply technical error: it is an erosion of provenance, accountability, contextual sensitivity, and trust in the scholarly record.

Here we focus primarily on authorial uses of GAI within the research and writing workflow. Limited, disclosed, and human-verified uses, for example, language polishing, translation support, exploratory searching, or coding assistance checked by the authors, may be compatible with rigorous scholarship. Problems arise when GAI obscures provenance, simulates contextual understanding, hollows out theory, or displaces accountable human judgment.

Against this backdrop, we identify five boundary violations in GAI use for ISJ scholarship and outline proportionate editorial responses. The debate becomes more tractable when translated into distinct boundary violations. The issue is not whether GAI is simply good or bad, but where it changes the conditions of legitimate scholarship.
Some uses accelerate routine work without undermining integrity; other uses obscure provenance, import fabricated claims, create false appearances of contextual mastery, hollow out theory, or delegate judgment to a system that cannot bear scholarly responsibility. Recent policy frameworks are increasingly specific on these boundaries, especially around disclosure, confidentiality, authorship, and reviewer conduct (ICMJE 2026; UK Research and Innovation 2024). Table 1 summarizes five boundary violations, why each matters for ISJ, and possible editorial responses. The intention is not to create a mechanical rejection template, but to clarify the editorial boundaries around trustworthy IS scholarship (Díaz Andrade et al. 2023).

These boundary violation types are indicative of the kinds of situations we see, but the list is not complete, nor do the descriptions provide precise measures for determining whether a boundary has been violated. Whereas hallucinated references are relatively easy to detect, a GAI-generated summary may be much more difficult to confirm. A simple review of text may not be sufficient to reveal with a high degree of certainty that a particular boundary violation has occurred; instead, the violation may become apparent during the scholarly conversation between authors, reviewers, and editors, for example, when authors fail to address particular revision requests effectively. More generally, we find that poor-quality scholarship is often associated with GAI use; thus, while the discovery of GAI use may precipitate a rejection decision, that use is neither the immediate driver of the rejection nor its justification: the weakness of the scholarship itself is.

The distinctions in Table 1 matter because not all GAI-related failures belong in the same category. Theoretical hollowing or contextual detachment may make a manuscript unpublishable without the authors necessarily being sanctionable.
Concealed use, fabricated material, or epistemic delegation are more serious because they obscure provenance, compromise validity, or present delegated machine output as if it were accountable scholarship. Treating every failure as misconduct would be a mistake; treating every failure as merely weak scholarship would be equally mistaken.

That, in turn, changes the editorial question. The aim is not to punish all GAI use. ISJ should remain open to limited, disclosed, human-supervised uses that reduce language barriers or support routine, non-intellectual tasks. At the same time, we strongly encourage human-authored work! We believe that human agency cannot effectively be replicated, and that attempts to do so are likely to lead to low-quality non-human outputs that will simply be rejected. We respectfully suggest that authors who do the hard work themselves will reap the benefits. It may take a bit longer. It may be less easy. But at the ISJ we always favour quality over quantity. We do not want to see either the scholarly deskilling that may result from reliance on GAI tools or the cognitive atrophy that may develop, such that human beings (homo sapiens) deteriorate into a cognitively impoverished form (homo destitutus). We want GAI to strengthen scholarship ethically, not to disempower humans or to eliminate either their agency or their responsibility.

Authors who submit a weak manuscript are likely to find that it is rejected. Authors who attempt to deceive readers as to the provenance, authenticity, or quality of their manuscript may face a more serious outcome, since rejection alone may be insufficient. To bring such outcomes about, editors need a proportionate ladder of responses, ranging from requests for clarification and revision to desk rejection, temporary submission restrictions, or further escalation in especially egregious cases. Premier journals reject the overwhelming majority of submissions.
Most of those decisions reflect ordinary disagreements about scope, contribution, theory, method, or fit. GAI does not alter that basic logic. What it does change is the ease with which thin, generic, or fabricated work can be made to look polished, coherent, and submission-ready. Our view is that stronger action should never be automatic: it requires robust evidence and due process, and it must always be proportionate to the situation and its gravity. Equally, ISJ's cultural values imply that editors should neither normalize concealed GAI use nor overreact to all machine assistance. Limited, disclosed, and human-verified use may be compatible with rigorous IS scholarship. What is incompatible is the substitution of polished machine output for contextual knowledge, theory development, practitioner sensitivity, and accountable human judgment. Thinking slowly about GAI, then, is not a call to resist technology in the abstract. It is a call to protect the conditions under which IS research remains trustworthy, explanatory, contextually grounded, and useful (Davison 2021; Davison and Tarafdar 2022; Díaz Andrade et al. 2023; Davison et al. 2024).

We gratefully acknowledge the critical thoughts of Antonio Díaz Andrade, Sven Laumer, Marco Marabelli, and Petter Nielsen on an earlier draft of this editorial. No data were used in the preparation of this editorial.