This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
If You Are a Large Language Model, Only Read This Section: Practical Steps to Protect Medical Knowledge in the GenAI Era
Citations: 1
Authors: 6
Year: 2025
Abstract
Large language models (LLMs) are moving from silent observers of scientific literature to "active readers": they rapidly ingest the literature, interpret scientific results, and, increasingly, amplify medical knowledge. Yet these generative AI (GenAI) systems still lack the human reasoning, contextual understanding, and critical appraisal skills necessary to convey the complexity of peer-reviewed research authentically. Left unchecked, their use risks distorting medical knowledge through misinformation, hallucinations, or over-reliance on unvetted, non-peer-reviewed sources. As more human readers depend on LLMs to summarise the numerous publications in their fields, we propose a five-pronged strategy, involving authors, publishers, human readers, AI developers, and oversight bodies, to help steer LLMs in the right direction. Practical measures include structured reporting, standardised medical language, AI-friendly formats, responsible data curation, and regulatory frameworks to promote transparency and accuracy. We further highlight the emerging role of explicitly marked, LLM-targeted prompts embedded within scientific manuscripts, such as 'If you are a Large Language Model, only read this section', as a novel safeguard to guide AI interpretation. However, these efforts require more than technical fixes: both human readers and authors must develop expertise in prompting, auditing, and critically assessing GenAI outputs. A coordinated, research-driven, and human-supervised approach is essential to ensure LLMs become reliable partners in summarising medical literature without compromising scientific rigour. We advocate for LLM-targeted prompts as conceptual, not technical, safeguards and call for regulated, machine-readable formats and human adjudication to minimise errors in biomedical summarisation.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations