OpenAlex · Updated hourly · Last updated: 27.03.2026, 04:16

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Can generative AI reliably synthesise literature? Exploring hallucination issues in ChatGPT

2025 · 19 citations · AI & Society · Open Access
Open full text at the publisher

Citations: 19

Authors: 2

Year: 2025

Abstract

This study evaluates the capabilities and limitations of generative AI, specifically ChatGPT, in conducting systematic literature reviews. Using the PRISMA methodology, we analysed 124 recent studies, focusing in-depth on a subset of 40 selected through strict inclusion criteria. Findings show that ChatGPT can enhance efficiency, with reported workload reductions averaging around 60–65%, though accuracy varies widely by task and context. In structured domains such as clinical research, title and abstract screening sensitivity ranged from 80.6% to 96.2%, while precision dropped as low as 4.6% in more interpretive tasks. Hallucination rates reached 91%, underscoring the need for careful oversight. Comparative analysis shows that AI matches or exceeds human performance in simple screening but underperforms in nuanced synthesis. To support more reliable integration, we introduce the Systematic Research Processing Framework (SRPF) as a guiding model for hybrid AI–human collaboration in research review workflows.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare