This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Benchmarking ChatGPT-generated multiple-choice questions against faculty-authored items in dental education

2025 · 1 citation · 7 authors · Scientific Reports · Open Access

Abstract

Large language models (LLMs) have become an integral part of self-directed student learning. This study aimed to benchmark assessment items generated by ChatGPT against those written by faculty instructors using item response theory. A 40-item Oral Medicine assessment for undergraduate dental students, comprising 20 MCQs written by instructors and 20 generated by ChatGPT 3.5, was administered to 547 students at four institutions. The results were analyzed using a 3-Parameter Logistic (3PL) model. The person reliability of the ChatGPT items was 0.778, with a mean absolute deviation of Q3 (MADaQ3) of 0.0741 (p < .001). Most items had a guessing parameter of 0.000; difficulty ranged from -5.57 to 1.64 and discrimination from -0.56 to 3.85. The person reliability of the instructor items was 0.845, with an MADaQ3 of 0.0698 (p < .001). The guessing parameter was 0.00, while difficulty ranged from -1.29 to 3.22 and discrimination from 0.50 to 4.64. Overall, the instructor items showed higher average scores and greater variability (instructor mean = 11.6, SD = 5.5 vs. ChatGPT mean = 9.01, SD = 4.32). The instructor items exhibited higher discrimination, a wider difficulty range, better alignment with respondent abilities, and consistent fit indices. However, the ChatGPT-generated items showed promise, and such models may outperform human authors in MCQ generation in the future.
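
The analysis rests on the standard 3PL item response function, P(correct | θ) = c + (1 - c) / (1 + e^(-a(θ - b))), where a is discrimination, b is difficulty, and c is the guessing (lower-asymptote) parameter. Below is a minimal Python sketch of this function; the item parameter values are illustrative picks from the ranges reported in the abstract, not the study's fitted estimates.

```python
# Minimal sketch of the 3PL item response function used in the study.
# The item parameters below are hypothetical, chosen within the ranges
# reported in the abstract; they are not the study's fitted values.
import numpy as np

def p_correct(theta, a, b, c):
    """3PL probability of a correct response.

    theta: examinee ability (standard-normal scale)
    a: discrimination, b: difficulty, c: guessing (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Ability grid on the usual standard-normal scale.
theta = np.linspace(-4, 4, 9)

# Illustrative items: an instructor-style item (higher discrimination,
# positive difficulty) vs. a ChatGPT-style item (easier), both with the
# near-zero guessing parameter the abstract reports.
instructor_item = p_correct(theta, a=2.5, b=1.0, c=0.0)
chatgpt_item = p_correct(theta, a=1.2, b=-1.5, c=0.0)

for t, pi, pc in zip(theta, instructor_item, chatgpt_item):
    print(f"theta={t:+.1f}  P(instructor)={pi:.3f}  P(ChatGPT)={pc:.3f}")
```

Tracing the two curves shows why the instructor items discriminate better: with a higher a, the probability of a correct answer rises more steeply around the item's difficulty b, so the item separates students just below and just above that ability level more sharply.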

Related works