OpenAlex · Updated hourly · Last updated: 22.04.2026, 11:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Specializing LLMs to Low-Documented Domains with RAG: An Analysis Across Models and Retrieval Depths

2026 · 0 citations · SN Computer Science · Open Access
Open full text at publisher

Citations: 0

Authors: 5

Year: 2026

Abstract

Large Language Models (LLMs) are increasingly used to support technical tasks such as software development. However, they often struggle in low-documented or fast-evolving domains, where missing training data leads to inaccurate or incomplete responses. This paper presents a reproducible pipeline based on Retrieval-Augmented Generation to specialize LLMs for such domains by integrating curated external knowledge. We detail a systematic process to build a high-quality Q&A dataset from public instructional sources and developer forums and apply it to the Unity XR Interaction Toolkit (XRIv2) as a case study. We construct a domain-specific benchmark of 101 question-answer pairs based on real learning resources and evaluate five open and proprietary LLMs (GPT-3.5-Turbo, GPT-4o Mini, LLaMA2, LLaMA3, and Mistral) under varying retrieval settings. Results show that standard automatic metrics (e.g., METEOR) struggle to detect quality differences, while LLM-as-a-Judge evaluations reveal significant model-specific improvements as more documents are retrieved. Our findings offer practical guidance for tuning retrieval strategies and highlight the potential for generalizing this approach to other technical domains requiring targeted LLM specialization.
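The retrieval-augmented setup the abstract describes, where a varying number of retrieved documents is prepended to the prompt, can be sketched minimally in Python. This is an illustrative assumption, not the paper's implementation: the toy term-overlap "embedding", the sample XR Interaction Toolkit passages, and the `k` sweep are all made up here to show the general shape of a retrieval-depth experiment.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding": term-frequency counts over whitespace tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k):
    # Rank corpus passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k):
    # Prepend the k retrieved passages as context for the LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical domain passages standing in for the curated Q&A/doc corpus.
corpus = [
    "XR Grab Interactable lets objects be picked up by an interactor.",
    "The XR Ray Interactor casts a ray for distant selection.",
    "Unity scenes are saved as .unity asset files.",
]

if __name__ == "__main__":
    for k in (1, 2, 3):  # sweep retrieval depth, mirroring the varying retrieval settings
        print(f"k={k}:\n{build_prompt('How do I grab an object in XR?', corpus, k)}\n")
```

Sweeping `k` like this is what lets a study separate model-specific gains from simply adding more context, which is the effect the LLM-as-a-Judge evaluation reportedly detects.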


Topics

Software Engineering Research · Topic Modeling · Artificial Intelligence in Healthcare and Education