OpenAlex · Updated hourly · Last updated: 09.05.2026, 03:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparative Analysis of ChatGPT and Google Gemini in Generating Patient Educational Resources on Cardiac Health: A Focus on Exercise-Induced Arrhythmia, Sleep Habits, and Dietary Habits

2025 · 3 citations · Cureus · Open Access
Open full text at the publisher

Citations: 3 · Authors: 6 · Year: 2025

Abstract

INTRODUCTION: Patient education is crucial in cardiovascular health, aiding shared decision-making and improving adherence to treatments. Artificial intelligence (AI) tools, including ChatGPT (OpenAI, San Francisco, CA) and Google Gemini (Google LLC, Mountain View, CA), are revolutionizing patient education by providing personalized, round-the-clock access to information, enhancing engagement, and improving health literacy. This paper aimed to compare the responses generated by ChatGPT and Google Gemini for creating patient education guides on "exercise-induced arrhythmia," "sleep habits and cardiac health," and "dietary habits and cardiac health." METHODOLOGY: A comparative observational study was conducted evaluating three AI-generated guides: "exercise-induced arrhythmia," "sleep habits and cardiac health," and "dietary habits and cardiac health," using ChatGPT and Google Gemini. Responses were evaluated for word count, sentence count, grade level, ease score, and readability using the Flesch-Kincaid calculator, and for similarity score using the QuillBot (QuillBot, Chicago, IL) plagiarism tool. Reliability was assessed with the modified DISCERN score. Statistical analysis was conducted using R version 4.3.2 (The R Core Team, R Foundation for Statistical Computing, Vienna, Austria). RESULTS: ChatGPT-generated responses had a higher overall average word count than Google Gemini; however, the difference was not statistically significant (p = 0.2817). Google Gemini scored higher on ease of understanding, though this difference was also not significant (p = 0.7244). There were no significant differences in sentence count or average words per sentence. ChatGPT tended to produce more complex content for certain topics, whereas Google Gemini's responses were generally easier to read.
Similarity scores were higher for ChatGPT across all topics, while reliability scores varied by topic, with Google Gemini performing better for exercise-induced arrhythmia and ChatGPT for sleep habits and cardiac health. CONCLUSIONS: The study found no significant difference in ease score, grade score, or reliability between the AI-generated responses for a cardiology disorders brochure. Future research should explore AI techniques across various disorders, ensuring up-to-date and reliable public information.


Topics

Artificial Intelligence in Healthcare and Education · Mobile Health and mHealth Applications · Digital Mental Health Interventions