OpenAlex · Updated hourly · Last updated: 29.03.2026, 15:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The reliability of answers from four different AI chatbots on periodontology theoretical exam questions: an evaluation in dental education

2025 · 1 citation · BMC Oral Health · Open Access
Open full text at the publisher

Citations: 1 · Authors: 3 · Year: 2025

Abstract

Dentistry is a profession shaped by modern technology, materials, and societal events such as the pandemic, making it an academic field that continuously evolves in both practice and education. Consequently, advancements such as the extensive implementation of digital dentistry, the incorporation of remote instruction into dental training during the pandemic, and the investigation of optimal AI integration within the dental profession and education are necessary. This study evaluated the reliability of answers provided by four major artificial intelligence (AI) chatbots using 125 periodontology exam questions administered between 2018 and 2023. The closed-ended questions were retrieved from the official archives of the Department of Periodontology, Faculty of Dentistry, Istanbul Aydin University, and were originally included in exams given to 3rd-, 4th-, and 5th-year students between 2018 and 2023. These questions were then posed to the AI chatbots for evaluation. Of the questions, 92 are true/false, 8 are fill-in-the-blank, 22 are multiple-choice, and 3 are calculation questions. Questions were asked to each AI chatbot (ChatGPT-4o mini, ChatGPT-4o, Gemini Advance, and CoPilot Pro) twice, with a one-month interval, and evaluated on a binary scoring system. Before the questions were posed to the AI chatbots, chat histories and cookies were cleared from the user interfaces, and a previously unused e-mail address was used to log in. The questions were asked one at a time, and the next question was not asked until the previous one had been answered. The NCSS (Number Cruncher Statistical System) 2007 (Kaysville, Utah, USA) program was used for statistical analyses. Descriptive statistical methods were used to evaluate the study data. For comparisons of qualitative data across three or more periods, Cochran's Q test was used, and the McNemar test was used for post hoc analyses.
Statistical significance was set at the p < 0.01 and p < 0.05 levels. CoPilot Pro achieved the highest accuracy rate both on Day 0 (73.6%) and after one month (75.2%). When comparing the performance of the AI chatbots between Day 0 and Month 1, no statistically significant difference was found. However, GPT-4o mini performed significantly worse than the other three AI chatbots at both time points (p < 0.05). The performance of GPT-4o was the most inconsistent, as 19 questions answered correctly in the first round were answered incorrectly in the second round. The findings underscore the need for critical evaluation of AI tools before their adoption in dental education. While AI chatbots can support dental education, their use should be carefully guided and complemented by clinical experience, critical appraisal of information sources, and academic oversight to ensure professional competence and responsible integration into learning processes.
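The abstract's analysis pipeline (Cochran's Q test over the four chatbots' binary accuracy scores, with McNemar tests as pairwise post hoc comparisons) can be sketched in plain Python. The answer matrix below is invented for illustration only; it is not the study's data, and the function names are my own.

```python
def cochrans_q(x):
    """Cochran's Q statistic for N subjects x k conditions of 0/1 outcomes.

    Under H0 (all conditions equally likely to score 1), Q is approximately
    chi-square distributed with k - 1 degrees of freedom.
    """
    k = len(x[0])
    col = [sum(row[j] for row in x) for j in range(k)]  # per-condition totals
    row_tot = [sum(r) for r in x]                       # per-subject totals
    grand = sum(row_tot)
    num = (k - 1) * (k * sum(c * c for c in col) - grand * grand)
    den = k * grand - sum(r * r for r in row_tot)
    return num / den

def mcnemar_stat(x, j1, j2):
    """McNemar chi-square (1 df, no continuity correction) comparing
    conditions j1 and j2: only discordant pairs enter the statistic."""
    b = sum(1 for r in x if r[j1] == 1 and r[j2] == 0)
    c = sum(1 for r in x if r[j1] == 0 and r[j2] == 1)
    return (b - c) ** 2 / (b + c) if (b + c) else 0.0

# Hypothetical example: 5 questions (rows) scored 1 = correct / 0 = incorrect
# for 4 chatbots (columns).
answers = [
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
]
print(round(cochrans_q(answers), 4))          # 0.6923
print(round(mcnemar_stat(answers, 2, 3), 4))  # 0.3333
```

In practice the Q and McNemar statistics would be compared against chi-square critical values (or p-values, e.g. via `scipy.stats.chi2`) at the thresholds the study reports.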

Topics

AI in Service Interactions · Artificial Intelligence in Healthcare and Education · Mobile Health and mHealth Applications