This is an overview page with metadata for this scientific article. The full article is available from the publisher.
From Evidence-based Endodontics to Generative AI: A Comparative Study of 11 Large Language Models
Citations: 0
Authors: 6
Year: 2026
Abstract
INTRODUCTION: Generative large language models (LLMs) are increasingly used in dentistry, yet their guideline-based diagnostic accuracy and reproducibility remain uncertain. Position statements from the American Association of Endodontists and the European Society of Endodontology provide rigorous, evidence-based standards, making them an ideal benchmark to assess alignment of LLM outputs with endodontic best practices.

METHODS: This study, conducted according to the Transparent Reporting of a Multivariable Model for Individual Prognosis or Diagnosis-Large Language Models (TRIPOD-LLM) guidelines, evaluated 11 LLMs: ChatGPT 5, ChatGPT 4o, ChatGPT o3, Gemini 2.5 Flash, Gemini 2.5 Pro, Claude Sonnet 4, Claude Opus 4, Perplexity R1 1776, Perplexity Sonar, DeepSeek, and DeepSeek DeepThink R1. Sixty multiple-choice questions derived from American Association of Endodontists and European Society of Endodontology position statements were administered to each model in 5 rounds, generating 3300 responses. The primary outcome was all-correct accuracy and the secondary outcome was intra-model consistency. Comparisons were performed with chi-square tests and Bonferroni adjustment.

RESULTS: Accuracy differed significantly among models (χ² = 50.56, df = 10, P < .001). ChatGPT 4o and Claude Opus 4 achieved 95.0% accuracy, followed by ChatGPT 5, Claude Sonnet 4, Gemini 2.5 Flash, and Gemini 2.5 Pro (93.3%), and ChatGPT o3 (90.0%). DeepSeek DeepThink R1 scored 86.7%, Perplexity R1 1776 83.3%, Perplexity Sonar 81.7%, and DeepSeek 63.3%. Consistency exceeded 90% for most models, peaking at 98.3% for top performers but falling to 75.0% for DeepSeek.

CONCLUSIONS: Most LLMs demonstrated high accuracy and reproducibility when benchmarked against authoritative endodontic guidelines. Despite notable progress over earlier generations, performance variability and confidently incorrect outputs highlight the need for rigorous validation and expert oversight before clinical integration.
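The overall comparison in the abstract is a Pearson chi-square test on an 11 (models) × 2 (correct/incorrect) contingency table, with Bonferroni adjustment for pairwise follow-ups. The sketch below derives per-model correct counts from the reported accuracies out of 60 questions (an inference, not the study's raw data) and recomputes the test statistic in pure Python; under that assumption it reproduces the reported χ² = 50.56 with df = 10.

```python
# Sketch: 11x2 chi-square test of model accuracy, as described in the abstract.
# Correct counts below are inferred from the reported per-model accuracies
# (percentage of 60 questions), so treat them as illustrative input.

N_QUESTIONS = 60

# model name -> questions answered correctly (accuracy% x 60, rounded)
correct = {
    "ChatGPT 4o": 57, "Claude Opus 4": 57,          # 95.0%
    "ChatGPT 5": 56, "Claude Sonnet 4": 56,         # 93.3%
    "Gemini 2.5 Flash": 56, "Gemini 2.5 Pro": 56,   # 93.3%
    "ChatGPT o3": 54,                               # 90.0%
    "DeepSeek DeepThink R1": 52,                    # 86.7%
    "Perplexity R1 1776": 50,                       # 83.3%
    "Perplexity Sonar": 49,                         # 81.7%
    "DeepSeek": 38,                                 # 63.3%
}

def chi_square_models(correct_counts, n_questions):
    """Pearson chi-square for a (models) x (correct/incorrect) table."""
    rows = [(c, n_questions - c) for c in correct_counts.values()]
    total = n_questions * len(rows)
    col_totals = [sum(r[j] for r in rows) for j in (0, 1)]
    stat = 0.0
    for row in rows:
        for j, observed in enumerate(row):
            # Expected count under the null of equal accuracy across models:
            # row total x column total / grand total.
            expected = n_questions * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    df = (len(rows) - 1) * (2 - 1)
    return stat, df

stat, df = chi_square_models(correct, N_QUESTIONS)
print(f"chi2 = {stat:.2f}, df = {df}")

# Bonferroni adjustment for all 55 pairwise model comparisons
n_pairs = 11 * 10 // 2
alpha_adjusted = 0.05 / n_pairs
print(f"per-comparison alpha = {alpha_adjusted:.5f}")
```

With these inferred counts the statistic evaluates to χ² ≈ 50.56 on 10 degrees of freedom, consistent with the abstract; the Bonferroni-adjusted per-comparison threshold for 55 pairs is 0.05/55 ≈ 0.00091.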
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations