This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial intelligence and academic publishing
Citations: 27 · Authors: 1 · Year: 2023
Abstract
"Never trust anything that can think for itself if you can't see where it keeps its brain." —J.K. Rowling, Harry Potter and the Chamber of Secrets, 1998

Artificial intelligence (AI) has revolutionized many aspects of our lives, from healthcare to entertainment. But what about academic publishing? AI tools such as ChatGPT (OpenAI, San Francisco, California) and Google Bard (Alphabet, Inc., Mountain View, California) can help researchers conduct literature reviews, write manuscripts, and generate references with ease. However, these tools also pose serious ethical challenges for the academic community.

One of the main challenges is plagiarism. How can we ensure that the content generated by AI is original and not copied from existing sources? How can we detect and prevent AI-generated plagiarism, especially when it is imperceptible to human readers and antiplagiarism software? How can we protect the intellectual property rights of the authors and publishers when AI can reproduce their work without permission?

Another challenge is authorship. Who should be credited as the author of an AI-generated manuscript? Does AI meet the criteria for authorship, such as making substantial contributions, approving the final version, and being accountable for its accuracy and integrity? How can we acknowledge the role of AI in the writing process without compromising the credibility and reputation of human authors?

A third challenge is quality. How can we ensure that the content generated by AI is reliable, valid, and relevant? How can we evaluate and peer review AI-generated manuscripts, especially when they may contain errors, biases, or misinformation? How can we maintain the standards and expectations of academic publishing when AI can produce large volumes of content with minimal human input?

These challenges require urgent attention and action from researchers, publishers, editors, reviewers, and policymakers.
We need to develop clear and consistent guidelines for using AI in academic publishing, such as declaring and explaining its use, acknowledging its limitations, and verifying its sources. We also need to create robust and transparent mechanisms for detecting and addressing AI-related misconduct, such as plagiarism, fabrication, or falsification. Moreover, we need to foster a culture of ethical awareness and responsibility among researchers who use AI tools, such as educating them about the potential risks and benefits, encouraging them to critically assess their outputs, and reminding them to respect the values and norms of academic publishing. AI has enormous potential to enhance and accelerate scientific communication, but it also poses significant perils that cannot be ignored or underestimated. We must be vigilant and proactive in ensuring that AI is used in a responsible and ethical manner that respects the integrity and quality of academic publishing.

Now for a disclosure. The entirety of the text above was generated using a free and nearly ubiquitous browser, Microsoft Edge (Microsoft Corp., Redmond, Washington). Microsoft began offering a version of the generative AI engine ChatGPT in combination with its Bing search engine in February 2023. The text appeared seconds after I typed "perils of generative AI in academic publishing" as a prompt in the "Compose" section of the Microsoft Edge sidebar and selected "Blog" for the writing style. Not a word was changed, and the only addition I made was to add the company locations after each of the cited AI technologies. I would argue that the text could have stood alone as an editorial on the key issues that dominate this topic. I would argue even more strongly that it would be difficult for anyone to differentiate this text from the spontaneous musings of a journal editor.
Although generative AI is not new, the remarkable increase in accessibility of generative tools in the past 6 months and the accompanying frenzy of AI-related media stories have catapulted the subject to the forefront of public discourse, perhaps most acutely in the spheres of education and academic publishing. Education at all levels places a premium on the learning process as integral to personal development. This is a domain where "show your work" and "explain your answer" are valued above simply providing an answer. Education is about growth: growth in intellectual prowess, yes, but also growth in character, which often occurs through the stress and strain of a nonlinear path that treats successes and failures as learning opportunities.

In academia, the themes of originality, innovation, attribution, and intellectual property are core tenets of the reward system and the honor code. Formalized processes exist for ascribing credit to people for their ideas and work. Where, then, does generative AI fit in? How can it be used—because it WILL be used—in ways that avoid compromising core ideals? And how can it be harnessed to accelerate learning and enhance discovery of ideas and attributions that would otherwise be overlooked?

Discourse between stakeholders that prioritizes listening and cultivating consensus on the purpose and core values of the enterprise will be crucial for managing the risks and rewards of AI, whether the area of practice is education, publishing, code writing, or art. A realistic appraisal of what AI is and what it can do is essential. Any intelligence that an AI engine possesses is a characteristic of the designer and is dependent on the finite information that is used to build it. At a time in history when technology can increasingly be confused with personhood, the value of the people and the human processes that birthed these technologies should always be the standard against which policy is measured.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations