This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The reliability of answers from four different AI chatbots on periodontology theoretical exam questions: an evaluation in dental education
Citations: 1
Authors: 3
Year: 2025
Abstract
Dentistry is a profession shaped by modern technology, materials, and societal events such as the pandemic, leading to an academic field that continuously evolves in both practice and education. Consequently, advancements such as the extensive implementation of digital dentistry, the incorporation of remote instruction into dental training during the pandemic, and the investigation of optimal AI integration within the dental profession and education have become necessary. This study evaluated the reliability of answers provided by four major artificial intelligence (AI) chatbots using 125 periodontology exam questions administered between 2018 and 2023. The closed-ended questions were retrieved from the official archives of the Department of Periodontology, Faculty of Dentistry, Istanbul Aydin University, and were originally included in exams given to 3rd-, 4th-, and 5th-year students between 2018 and 2023. These questions were then posed to AI chatbots for evaluation. Of the 125 questions, 92 were true/false, 8 fill-in-the-blank, 22 multiple-choice, and 3 calculation questions. Each question was posed to each AI chatbot (ChatGPT-4o mini, ChatGPT-4o, Gemini Advance, and CoPilot Pro) twice, with a one-month interval, and answers were evaluated on a binary scoring system. Before the questions were posed, chat histories and cookies were cleared from the user interfaces, and a previously unused e-mail address was used to log in. Questions were asked one at a time, and the next question was not asked until the previous one had been answered. The NCSS (Number Cruncher Statistical System) 2007 (Kaysville, Utah, USA) program was used for statistical analyses. Descriptive statistical methods were used to evaluate the study data. For comparisons of qualitative data across three or more periods, Cochran's Q test was used, and the McNemar test was used for post hoc analyses.
Statistical significance was assessed at the p < 0.01 and p < 0.05 levels. CoPilot Pro achieved the highest accuracy rate both on Day 0 (73.6%) and after one month (75.2%). When comparing the performance of the AI chatbots between Day 0 and Month 1, no statistically significant difference was found. However, GPT-4o mini performed significantly worse than the other three AI chatbots at both time points (p < 0.05). GPT-4o showed the most inconsistent performance: 19 questions answered correctly in the first round were answered incorrectly in the second round. The findings underscore the need for critical evaluation of AI tools before their adoption in dental education. While AI chatbots can support dental education, their use should be carefully guided and complemented by clinical experience, critical appraisal of information sources, and academic oversight to ensure professional competence and responsible integration into learning processes.
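The statistical procedure described in the abstract, Cochran's Q test across the four chatbots' binary scores, followed by pairwise McNemar tests as post hoc comparisons, can be sketched in pure Python. This is a minimal illustrative sketch: the function names and the toy score matrix below are assumptions, not the study's actual data or analysis code (the study used NCSS 2007).

```python
def cochrans_q(scores):
    """Cochran's Q statistic for related binary observations.

    scores: list of rows, one per question; each row holds the
    binary (0/1) result for each of the k chatbots.
    The statistic is compared against a chi-square with k-1 df.
    """
    k = len(scores[0])
    col_totals = [sum(row[j] for row in scores) for j in range(k)]
    row_totals = [sum(row) for row in scores]
    g_bar = sum(col_totals) / k
    num = k * (k - 1) * sum((g - g_bar) ** 2 for g in col_totals)
    den = k * sum(row_totals) - sum(l * l for l in row_totals)
    return num / den

def mcnemar_chi2(a, b):
    """Post hoc comparison of two chatbots on the same questions
    (continuity-corrected McNemar statistic, 1 df)."""
    b01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    b10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    if b01 + b10 == 0:
        return 0.0
    return (abs(b01 - b10) - 1) ** 2 / (b01 + b10)

# Toy matrix: rows = questions, columns = four chatbots, 1 = correct.
toy = [[1, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 0],
       [0, 1, 1, 1],
       [1, 0, 0, 0]]
print(cochrans_q(toy))                     # overall difference among chatbots
print(mcnemar_chi2([r[0] for r in toy],    # pairwise post hoc comparison
                   [r[3] for r in toy]))
```

In the study's setting each row would be one of the 125 exam questions and each column one chatbot's binary score at a given time point; a significant Q would then justify the pairwise McNemar comparisons reported in the results.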
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,562 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,351 citations