OpenAlex · Updated hourly · Last updated: 28.03.2026, 15:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Disclosing use of Artificial Intelligence: Promoting transparency in publishing

2023 · 5 citations · Lung India · Open Access

Abstract

Artificial intelligence (AI) technologies such as machine learning algorithms and natural language processing have permeated virtually every field of our lives, including clinical practice and research. In research, these technologies can analyse large amounts of data and identify patterns and trends that human researchers may miss. Such analytical capabilities can significantly accelerate medical diagnosis and research and improve our understanding of complex diseases and treatments. AI systems based on large language models (LLMs) also have the potential to generate research manuscripts. While this ability holds promise, it brings concurrent ethical challenges. AI can expedite data analysis, hypothesis testing and knowledge generation, potentially accelerating medical advancements; however, concerns have arisen about bias, transparency and the role of human expertise in scientific endeavours. LLMs can write scholarly text, and aggregate, paraphrase and summarise it, and may thus transform scholarly writing, research and communication in several unimaginable ways.[1-3] By quickly generating research manuscripts, AI offers the potential to disseminate valuable insights rapidly, leading to more informed decision-making in clinical settings. However, widespread concerns about the use of AI by humans to generate scholarly text have led to intense debates, and several academic institutions have banned its use because it could undermine academic integrity and learning.[4] Ethical issues have been raised about the use of LLMs, and journals and publishing houses have adopted variable stances.
While some journals, such as Accountability in Research,[5] the Journal of the American Medical Association (JAMA)[6] and Nature,[7] decided to adopt or pursue policies that allow using LLMs under conditions that promote transparency, accountability, fair assignment of credit and honesty, the editors of Science[8] highlighted ethical problems created by LLMs and banned their use, citing text generated by AI as a form of plagiarism from the AI model, with the authors contributing precious little to deserve credit for the generated text. Against the backdrop of such dichotomous opinion, vacillating between a ban and cautious permission, it is important to discuss dispassionately whether the use of AI should be banned or transparently acknowledged. A ban on its use is obviously unenforceable, as detecting the use of AI or an LLM would be extremely difficult, if not impossible. Scholarly journals have to rely on the voluntary disclosures of the authors and their certification as to the accuracy and veracity of the content. Authentic-looking but inaccurate outputs from chatbots are not infrequent; thus, while LLMs help in generating content, authenticity and accuracy ultimately remain the responsibility of the human. Furthermore, ethical principles, including openness, honesty, transparency, efficient use of resources and fair allocation of credit,[9] demand disclosing the use of LLMs. Openness, transparency and honesty about the methods and tools used are paramount to fostering integrity, reproducibility and rigour in research. With respect to fair allocation of credit, failing to disclose the use of LLMs, especially those that provide context-specific suggestions and can generate or substantially affect content, violates norms of ethical attribution because it gives undue credit to (human) contributors for work they did not do.[10] Lung India, in its revised article submission guidelines, has mandated the disclosure of the use of AI to foster openness and honesty.
Another major controversy concerns the authorship of LLMs. Some researchers have listed LLMs as authors[11,12] and argue that LLMs should be credited with authorship if they make significant contributions to a manuscript,[11,13] as failing to do so would assign inappropriate credit to humans.[10] Designating an LLM as an author is ethically problematic because widely accepted journal guidelines, such as those provided by the International Committee of Medical Journal Editors (ICMJE) and other research fora, emphasise that authors must be willing to be responsible and accountable for the content of the manuscript. Accountability and credit are two sides of the same coin and enmesh responsibility; contributors cannot have one without the other.[14,15] Today's LLMs are neither responsible nor accountable because they lack free will. While they can manipulate linguistic symbols and digital data quite adeptly, they lack self-awareness, consciousness, a human-like understanding of language, and values or preferences.[16] Such competencies may be possible in future LLMs, but at present, accountability is essential for promoting integrity, reproducibility, rigour and other moral values in research.[9] LLMs cannot be held morally or legally responsible or accountable for their actions. Moreover, authors are normally required to have contributed to the conception and design of a study, the acquisition of data, its analysis and interpretation, and the drafting or revision of the article for content, and to have given final approval for submission. AI tools obviously cannot perform these functions, and as such, authorship credit is clearly inappropriate. Lung India endorses this view: at present, AI cannot be included as an author. A future scenario considered by some investigators is one in which LLMs develop to the point where they can explain to a human being what they have done and why.
The explainable AI movement seeks to make this type of interaction possible.[17] LLMs might reach that level of capability sooner rather than later, but it will probably always be fraught with problems. Although these abilities, once acquired, would take LLMs a step closer to being accountable, they would still fall far short of the degree of accountability we expect from human beings. At present, researchers who fabricate or falsify data can be subject to various forms of punishment,[9] which play an important role in deterring misconduct in research; punishments, however, cannot affect (let alone deter) LLMs in any way, because they do not have interests, values or feelings. Without such deterrence, ethical catastrophes could ensue. A related issue is that, if LLMs cannot be co-authors, should they be mentioned in the acknowledgements section? After all, non-author contributors are typically recognised and acknowledged in research, and recognising non-authors in the acknowledgements section is supported by widely accepted guidelines for authors. While some[13] endorse this, others[18] vehemently oppose it, because 'it still carries some moral and legal weight and should therefore involve consent'. Researchers argue that we do not credit tools such as PubMed, Web of Science, Google and Bing in the acknowledgements even though they help us generate a lot of our manuscripts, and that it may be sufficient to state in the Introduction or Methods section that LLMs were used and for what purpose. To uphold the ethical norms of transparency, openness, honesty and fair attribution of credit, Lung India believes that, in cases where LLMs are used, the disclosure should appear as free text in the submitted manuscript, clearly describing when and how they were used, what prompts were given and how they affected the text, so that undue credit to humans is obviated.
The application of AI to the creation of medical research manuscripts has enormous potential to advance the field, encourage reproducibility and eliminate publication bias. Instead of replacing human expertise, AI should be seen as a tool to supplement it. While AI can analyse data more quickly and comprehensively than humans can, it lacks the originality and contextual knowledge that human researchers bring to the table. These developments must be balanced, though, with a strong commitment to ethical considerations. Realising the true benefits of AI in medical research requires striking the right balance between AI and human expertise, as well as ensuring transparency, mitigating bias and abiding by ethical principles. The human element is crucial in asking critical questions, formulating hypotheses and interpreting results within the broader medical context. Collaboration between AI and human researchers is key to maximising the potential of both parties and achieving the best outcomes. The responsibility for ethical AI implementation lies with both researchers and publishers. Researchers must adhere to robust ethical guidelines while designing, deploying and interpreting AI-generated studies. Full disclosure of the involvement of AI in research should be made in manuscripts, and potential limitations and biases should be explicitly stated. Publishers, for their part, must establish clear policies for accepting AI-generated manuscripts, emphasising the importance of adherence to ethical principles, transparency and reproducibility. By embracing AI responsibly and ethically, we can usher in a new era of transformative healthcare breakthroughs that will positively impact millions of lives around the world. It is crucial for researchers to acknowledge the potential ethical concerns associated with AI, such as data privacy and algorithmic bias.
Additionally, collaboration between researchers, ethicists and policymakers can help establish guidelines and regulations that ensure the responsible use of AI in research. By addressing these concerns and working together, we can maximise the benefits of AI while minimising its potential risks. Disclosures: The author did not use any AI language models for generating the text. Plagiarism and grammar checks were performed using 'Grammarly' (Grammarly Inc.).

Topics: Artificial Intelligence in Healthcare and Education · Ethics in Clinical Research · Academic integrity and plagiarism