OpenAlex · Updated hourly · Last updated: 28.03.2026, 06:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Authors’ Reply: Citation Accuracy Challenges Posed by Large Language Models

2025 · 2 citations · JMIR Medical Education · Open Access
Open full text at the publisher

2 citations · 5 authors · Year: 2025

Abstract

Large language models (LLMs) have demonstrated significant potential in academic research but face challenges in generating accurate citations. The issue of hallucinated references—well-formatted but fictitious citations—arises due to LLMs' limited access to subscription-based databases and their reliance on probabilistic text generation. This letter discusses two key approaches to mitigating these issues. First, retrieval-augmented generation (RAG) combined with Hallucination Aware Tuning (HAT) improves citation integrity by integrating external databases and employing hallucination detection models. However, even RAG-HAT systems may still misinterpret source content. Second, we propose the development of “Reference-Accurate” Academic LLMs by major global publishers, which would be trained exclusively on rigorously verified academic literature, ensuring that all citations generated are authentic and traceable. We recommend a dual approach integrating RAG-HAT with publisher-backed academic LLMs, along with human oversight, to enhance AI-assisted scholarly communication. Future research should evaluate the accuracy and reliability of these methods to promote responsible AI use in academia.
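The retrieval-augmented verification step described in the abstract can be illustrated with a minimal sketch: generated citations are checked against an external store of verified references, and anything that does not match is flagged rather than emitted. All names here (`VERIFIED_REFERENCES`, `verify_citation`, `filter_citations`) and the DOIs are illustrative assumptions, not part of any published RAG-HAT implementation.

```python
# Minimal sketch of a RAG-style citation check, assuming a local store of
# rigorously verified references keyed by DOI. Illustrative only.

VERIFIED_REFERENCES = {
    "10.1234/example.2025.001": "A Verified Example Article",  # hypothetical entry
}


def verify_citation(doi: str, title: str) -> bool:
    """Return True only if the DOI exists in the verified store and the
    stored title matches the generated title (case-insensitive)."""
    stored = VERIFIED_REFERENCES.get(doi)
    return stored is not None and stored.lower() == title.lower()


def filter_citations(citations):
    """Split generated citations into verified ones and flagged ones.

    Flagged citations would be routed to human oversight instead of
    appearing in the final output.
    """
    kept, flagged = [], []
    for c in citations:
        (kept if verify_citation(c["doi"], c["title"]) else flagged).append(c)
    return kept, flagged
```

In a full RAG-HAT pipeline the lookup would hit a publisher-backed database rather than an in-memory dictionary, and a hallucination-detection model would score borderline matches; the filtering logic, however, stays the same.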

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · COVID-19 diagnosis using AI