This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluation of ChatGPT’s Accuracy, Repeatability, and Reasoning Ability in Prosthodontics Education: A Cross-Sectional Comparative Study with Prosthodontists
Citations: 0
Authors: 7
Year: 2026
Abstract
Background: The integration of artificial intelligence (AI) tools such as ChatGPT into dental education is increasing, yet their accuracy, reasoning quality, and reliability remain underexplored in specialized fields such as prosthodontics. This study aimed to evaluate ChatGPT's performance in answering prosthodontics-based questions by comparing its accuracy with that of experienced prosthodontists, and by assessing its repeatability and reasoning ability.

Material and Methods: A cross-sectional observational study was conducted using 36 validated prosthodontics-based questions, categorized by difficulty (easy, medium, hard) and type (theoretical, clinical). Responses were obtained from a panel of prosthodontists via a Google Form and from the ChatGPT 4o mini version, twice daily for 15 days. Each group generated 1080 responses. The accuracy of ChatGPT's responses was compared with that of the prosthodontists. ChatGPT's reliability was assessed using the Intraclass Correlation Coefficient (ICC), Standard Error of Measurement (SEM), and Coefficient of Variation (CV). Five subject-matter experts rated ChatGPT's reasoning quality on a 3-point Likert scale, and Pearson correlation was used to analyze the relationship between reasoning and accuracy.

Results: Prosthodontists outperformed ChatGPT in overall accuracy (p < 0.05), with significant differences observed particularly for medium-difficulty and clinical questions. ChatGPT demonstrated fair reliability (ICC = 0.427), with an SEM of 25.18 and a CV of 61.7%, indicating moderate variability. Reasoning analysis showed that 38.9% of ChatGPT's responses were rated strong, while 36.1% were rated poor. A significant positive correlation was found between reasoning quality and accuracy (r = 0.353, p = 0.035).

Conclusions: ChatGPT demonstrates moderate ability in delivering accurate theoretical information but lacks consistency and clinical judgment. Its role should be limited to a supplementary aid in dental education, with expert oversight required to ensure accuracy and contextual relevance.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,697 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,602 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,127 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,872 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations