This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Authors’ Reply: Citation Accuracy Challenges Posed by Large Language Models
Citations: 2
Authors: 5
Year: 2025
Abstract
Large language models (LLMs) have demonstrated significant potential in academic research but face challenges in generating accurate citations. The issue of hallucinated references—well-formatted but fictitious citations—arises due to LLMs' limited access to subscription-based databases and their reliance on probabilistic text generation. This letter discusses two key approaches to mitigating these issues. First, retrieval-augmented generation (RAG) combined with Hallucination Aware Tuning (HAT) improves citation integrity by integrating external databases and employing hallucination detection models. However, even RAG-HAT systems may still misinterpret source content. Second, we propose the development of “Reference-Accurate” Academic LLMs by major global publishers, which would be trained exclusively on rigorously verified academic literature, ensuring that all citations generated are authentic and traceable. We recommend a dual approach integrating RAG-HAT with publisher-backed academic LLMs, along with human oversight, to enhance AI-assisted scholarly communication. Future research should evaluate the accuracy and reliability of these methods to promote responsible AI use in academia.
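The retrieval-based mitigation described in the abstract can be illustrated with a minimal sketch: before a generated citation is emitted, it is checked against a store of verified references, and anything unmatched is flagged for human review. All names, titles, and the lookup structure below are hypothetical, standing in for the external databases and detection models the letter discusses.

```python
# Minimal sketch of citation verification against a trusted store, in the
# spirit of retrieval-augmented generation (RAG). The reference store and
# all titles below are hypothetical placeholders, not real publisher data.

VERIFIED_REFERENCES = {
    # normalized title -> identifier; stands in for a curated academic database
    "an example verified paper title": "10.0000/example.0001",
}

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so lookups tolerate casing/spacing."""
    return " ".join(title.lower().split())

def verify_citations(generated_titles):
    """Split LLM-generated citation titles into verified and suspect lists."""
    verified, suspect = [], []
    for title in generated_titles:
        if normalize(title) in VERIFIED_REFERENCES:
            verified.append(title)
        else:
            # Candidate hallucination: withhold and escalate to human oversight
            suspect.append(title)
    return verified, suspect

if __name__ == "__main__":
    ok, flagged = verify_citations([
        "An Example Verified Paper Title",
        "A Fictitious Survey of Imaginary Methods",
    ])
    print("verified:", ok)
    print("flagged:", flagged)
```

In practice the exact-match lookup would be replaced by retrieval over a publisher-curated index, with a hallucination-detection model scoring borderline matches, as the RAG-HAT approach in the abstract proposes; the human-review step for flagged items remains essential either way.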
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations