OpenAlex · Updated hourly · Last updated: 02.04.2026, 10:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Do Large Language Models Know When They Lack Knowledge?

2026 · 0 citations · Electronics · Open Access
Open full text at publisher

Citations: 0

Authors: 4

Year: 2026

Abstract

Although Large Language Models (LLMs) excel in language tasks, producing fluent and seemingly high-quality text, their outputs are essentially probabilistic predictions rather than verified facts, so their reliability is not guaranteed. This issue is particularly pronounced when models lack the required knowledge, which significantly increases the risk of fabrications and misleading content. Therefore, understanding whether LLMs know when they lack knowledge is of critical importance. This work systematically evaluates leading LLMs on their ability to recognize knowledge insufficiency and examines several training-free techniques to foster this metacognitive capability, referred to as “integrity” throughout this research. For rigorous evaluation, this study first develops a new Question-Answering (Q&A) dataset called Honesty. Specifically, events emerging after the model’s deployment are used to generate “unknown questions,” ensuring they fall outside LLMs’ knowledge boundaries, while “known questions” are drawn from existing Q&A datasets; together these constitute the Honesty dataset. Subsequently, based on this dataset, systematic experiments are conducted using multiple representative LLMs (e.g., GPT-4o and DeepSeek-V3). The results reveal that semantic understanding and reasoning capabilities are the core factors influencing “integrity.” Furthermore, we find that well-crafted prompts markedly improve models’ integrity, and integrating them with probability- or consistency-based uncertainty evaluation methods yields even stronger performance. These findings highlight the considerable potential of LLMs to express uncertainty when they lack knowledge, and we hope these observations can lay the groundwork for developing more reliable models.
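The abstract mentions consistency-based uncertainty evaluation without detailing the paper's exact procedure. The general idea behind such methods is to sample several answers to the same question and abstain when they disagree; a minimal sketch of that idea follows. The `answer_or_abstain` helper, the exact-match normalization, and the 0.6 agreement threshold are illustrative assumptions here, not the paper's implementation:

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of sampled answers that agree with the most frequent one."""
    normalized = [a.strip().lower() for a in answers]
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized)

def answer_or_abstain(answers, threshold=0.6):
    """Return the majority answer, or abstain when agreement is too low."""
    if consistency_score(answers) < threshold:
        return "I don't know"
    normalized = [a.strip().lower() for a in answers]
    return Counter(normalized).most_common(1)[0][0]

# High agreement across samples suggests the question is within the
# model's knowledge; scattered answers suggest it lies beyond it.
print(answer_or_abstain(["Paris", "paris", "Paris", "Paris", "Lyon"]))  # paris
print(answer_or_abstain(["1912", "1907", "1921", "1915", "1912"]))      # I don't know
```

In practice, the normalization step would typically be replaced by semantic clustering of answers, since free-form generations rarely match exactly.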

Similar works

Authors

Institutions

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Text Readability and Simplification