OpenAlex · Updated hourly · Last updated: 06.04.2026, 12:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Assessment of quality, understandability, actionability, and readability of responses of selected chatbots to the top searched queries about oral cancer

2025 · 1 citation · Digital Dentistry Journal · Open Access
Open full text at the publisher

Citations: 1 · Authors: 5 · Year: 2025

Abstract

This study assessed the quality, understandability, actionability, and readability of the health information provided by four chatbots in response to the most Google-searched queries for the term "Oral Cancer" (OC). We searched Google Trends for queries related to OC. The top queries (only four were returned) were then input into ChatGPT-4, Perplexity, Chatsonic, and Google Bard using their default settings. The quality of the resulting responses (texts) was evaluated with the DISCERN tool, while understandability and actionability were evaluated with the Patient Education Materials Assessment Tool (PEMAT). An online calculator was used to evaluate the readability of the obtained texts. The DISCERN total score ranged from 20 (low quality) to 44 (moderate quality); the lowest quality was for Google Bard (P = 0.007). The understandability score ranged from 50/100 to 78/100; the lowest understandability was for Perplexity (P = 0.004). The actionability score ranged from 0 to 60; the highest actionability was for Google Bard (P = 0.007). All chatbots scored above 7 on the Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Automated Readability Index (ARI), Flesch-Kincaid Grade Level (FKGL), and Simple Measure of Gobbledygook (SMOG) readability indicators, with no significant differences (P > 0.05) except for CLI (P < 0.001). On the Flesch Reading Ease (FRE) indicator, all chatbots scored below 80, with no significant difference (P = 0.101). ChatGPT-4 and Google Bard recorded a Lexical Density (LD) below 60, with a significant difference between the chatbots (P < 0.001). The quality of the health information about OC obtained from ChatGPT-4, Perplexity, Chatsonic, and Google Bard was suboptimal, and its usefulness (actionability) is limited by its difficult readability level.
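The readability indicators listed above are standard formula-driven metrics, so their behaviour is easy to demonstrate in code. The following is a minimal sketch, not the unnamed online calculator used in the study: it assumes the open-source Python package textstat for the grade-level indices, and the lexical-density calculation is a crude content-word approximation, since the abstract does not define how LD was computed.

import textstat

# Small function-word list used to approximate "content words" for lexical
# density; a real tool would use part-of-speech tagging. (Assumption, not
# taken from the paper.)
FUNCTION_WORDS = {
    "a", "an", "the", "is", "are", "was", "were", "be", "been",
    "of", "in", "on", "to", "and", "or", "it", "this", "that",
    "for", "with", "by", "as", "at", "from",
}

def readability_report(text):
    """Score a text on the indicators named in the abstract."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    content_words = [w for w in words if w and w not in FUNCTION_WORDS]
    return {
        "GFI":  textstat.gunning_fog(text),
        "CLI":  textstat.coleman_liau_index(text),
        "ARI":  textstat.automated_readability_index(text),
        "FKGL": textstat.flesch_kincaid_grade(text),
        "SMOG": textstat.smog_index(text),
        "FRE":  textstat.flesch_reading_ease(text),      # higher = easier
        "LD":   100 * len(content_words) / len(words),   # % content words
    }

sample = ("Oral cancer is a malignant tumour that can develop in any part "
          "of the mouth. Early detection greatly improves the outcome.")
print(readability_report(sample))

Read against the study's thresholds, grade-level indices above 7 and an FRE below 80 both flag text as harder than the sixth-to-eighth-grade level commonly recommended for patient education materials.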

Topics

Artificial Intelligence in Healthcare and Education · Tracheal and airway disorders · Health Literacy and Information Accessibility