OpenAlex · Updated hourly · Last updated: 2026-05-08, 05:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Conceptual Framework for Simulated Self-Assessment and Meta-Evaluation of Generative AI Models

2026 · 0 citations · AI · Open Access

0 Citations · 5 Authors · Year: 2026

Abstract

The increasing integration of generative artificial intelligence (GenAI) into scientific research raises the question of whether such systems can be evaluated not only through external benchmarks but also through structured analysis of their own meta-evaluative responses. This study introduces a conceptual framework for simulated self-assessment of GenAI models, formalized through a multidimensional self-assessment profile and a metacognitive self-assessment index (MSI). The proposed framework integrates quantitative criteria capturing hallucination propensity, knowledge currency, formal-structure handling, source validity, and terminological precision. To evaluate the reliability of model-generated self-assessments, psychometric instruments traditionally used in human metacognition research—MAI, SRIS, and SDQ—are adapted for large language models. Experimental results across multiple GPT models indicate that, despite the absence of genuine introspective mechanisms, GenAI systems can produce internally consistent and moderately calibrated meta-evaluative responses. These findings suggest that simulated self-assessment, when interpreted within a rigorous methodological framework and combined with external validation, can serve as a complementary quantitative tool for trust analysis and reliability assessment of generative models.
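The abstract describes the multidimensional self-assessment profile and the metacognitive self-assessment index (MSI) only conceptually. A minimal sketch of how such an index could be formed is given below; the criterion names follow the abstract, but the normalization to [0, 1], the weights, and the weighted-mean aggregation are illustrative assumptions, not the paper's actual formula.

```python
# Hypothetical sketch: aggregate five self-assessment criteria (each
# normalized to [0, 1]) into a single MSI score. Weights and the
# weighted-mean rule are assumptions for illustration only.

CRITERIA = [
    "hallucination_propensity",   # assumed inverted: lower propensity -> higher score
    "knowledge_currency",
    "formal_structure_handling",
    "source_validity",
    "terminological_precision",
]

def msi(scores: dict, weights: dict = None) -> float:
    """Weighted mean of normalized criterion scores, result in [0, 1]."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # equal weighting by default
    total = sum(weights[c] for c in CRITERIA)
    return sum(weights[c] * scores[c] for c in CRITERIA) / total

example = {
    "hallucination_propensity": 0.8,   # already inverted to a score
    "knowledge_currency": 0.6,
    "formal_structure_handling": 0.9,
    "source_validity": 0.7,
    "terminological_precision": 0.85,
}
print(round(msi(example), 3))  # equal weights reduce to a plain mean -> 0.77
```

With equal weights the index is simply the arithmetic mean of the five criterion scores; unequal weights would let an evaluator emphasize, for example, hallucination propensity over terminological precision.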

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI