OpenAlex · Updated hourly · Last updated: 04.05.2026, 10:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparison of the information of generative artificial intelligence large language models and professional guidelines regarding nutritional advice for orthodontic patients

2025 · 0 citations · BMC Oral Health · Open Access
Open full text at publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

BACKGROUND: To evaluate the credibility of large language models (LLMs) compared with the American Association of Orthodontists (AAO) and British Orthodontic Society (BOS) guides regarding nutritional guidelines for orthodontic patients.

METHODS: The responses offered by ChatGPT 4.0, Copilot, and Gemini were assessed for information credibility regarding tooth decay, food, beverages, oral care, and further assistance, and rated against the guidelines as compatible, compatible but insufficient, partially incompatible, or incompatible. Reliability was analyzed with the DISCERN tool. The readability of the sources was assessed using the Flesch Reading Ease Score and the Flesch-Kincaid Grade Level. The Friedman test was conducted to compare DISCERN scores.

RESULTS: The LLMs' responses were understandable and compatible with the guidelines, but detailed information on tooth decay, dental plaque, and the risks of acidic environments was inadequate. ChatGPT 4.0, Copilot, and Gemini provided detailed lists of foods to avoid and to include. Only the AAO advised caution with extreme temperatures and suggested sugar-free gum. All sources mentioned the necessity of good oral hygiene, but Copilot did not mention oral hygiene tools. All sources except ChatGPT 4.0 recommended consulting an orthodontist for personalized advice. The BOS leaflet had the highest mean DISCERN score (4.70 ± 0.27), followed by Gemini (4.54 ± 1.03), the AAO web source (4.45 ± 0.75), Copilot (3.87 ± 1.64), and ChatGPT 4.0 (3.08 ± 1.65), with no significant difference between sources. The BOS and AAO materials were more readable than the LLM responses; ChatGPT 4.0 was the most readable of the LLMs but was still difficult for readers.

CONCLUSION: The guidelines have a superior narrative in terms of detailed content and, especially, the justifications for their recommendations. Artificial intelligence (AI)-supported LLMs provided understandable, simple, and accurate information, despite lacking some details on certain topics, but their responses were difficult to read. Overall, patients should be advised to use pre-trained algorithms with caution as a source of information and to seek individual advice from their orthodontists.
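The two readability metrics named in the METHODS section follow standard published formulas. A minimal sketch of how such scores are computed (the syllable counter below is a rough vowel-group heuristic of our own, not the exact tokenizer used in the study):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, with a correction for a
    # trailing silent "e". Real syllabification is more involved.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Higher Flesch Reading Ease means easier text (scores below roughly 50 are considered difficult, consistent with the finding that the LLM responses were hard to read), while the Flesch-Kincaid Grade Level maps the same two ratios onto US school-grade levels.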


Topics

Artificial Intelligence in Healthcare and Education · Orthodontics and Dentofacial Orthopedics · Dental Radiography and Imaging