This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Is ChatGPT a Sufficient and Readable Help Tool for the Most Frequently Asked Questions in General Dentistry?
Citations: 0
Authors: 3
Year: 2025
Abstract
Purpose: Artificial intelligence (AI)-enabled systems such as ChatGPT provide benefits in many areas of dentistry, such as patient education, counseling, appointment management, and professional development. The correct and effective use of such technologies can improve the experience of both patients and dentists. The aim of this study was to determine the accuracy and readability of ChatGPT responses to common patient questions about general dentistry.

Materials and Methods: The questions most frequently asked by patients were collected from web-based tools. The ability to provide accurate and relevant information was assessed subjectively by two observers using a 5-point Likert scale, and objectively by comparing the responses with the Clinical Practice Guidelines and Dental Evidence published by the American Dental Association (ADA) and with the literature. Readability was assessed using the Simple Measure of Gobbledygook (SMOG), Flesch-Kincaid Grade Level (FKGL), and Flesch Reading Ease Score (FRES).

Results: ChatGPT produced responses above the recommended level for the average patient (SMOG: 17.91; FRES: 43.98; FKGL: 10.29). The mean Likert score was 4.55, indicating that most responses were correct apart from minor inaccuracies or missing information. The FKGL and FRES readability scores correspond to a difficult reading level for patients seeking answers to general dental questions.

Conclusion: ChatGPT has the potential to be a helpful, decision-support tool for patients. However, ChatGPT should not replace dentists, because incorrect and/or incomplete answers can negatively impact patient care.
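The three readability metrics named in the abstract have standard published formulas. The sketch below computes them from pre-tallied text statistics; robustly counting words, sentences, and syllables is the hard part in practice, so studies typically use validated tools, and the counts here are simply taken as inputs.

```python
# Minimal sketch of the three readability formulas used in the study.
# Word/sentence/syllable counts are assumed to be supplied by the caller.
import math

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher means easier (90-100 is roughly 5th grade)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximates a US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables: int, sentences: int) -> float:
    """SMOG grade from the count of words with 3+ syllables.

    The published calibration assumes a 30-sentence sample; shorter
    samples yield only an approximation.
    """
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
```

For example, a text of 100 words in 5 sentences with 150 syllables scores a FRES of about 59.6 ("fairly difficult"), which is still well above the 43.98 reported for the ChatGPT responses in this study.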
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 cit.