This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating the Quality of Cardiovascular Disease Information From AI Chatbots: A Comparative Study
Citations: 2
Authors: 6
Year: 2025
Abstract
Artificial intelligence (AI) is increasingly used as an informational resource, with chatbots attracting users through their ability to generate instantaneous responses. This study evaluates the understandability, actionability, readability, quality, and misinformation of medical information provided by four prominent chatbots (Bard, ChatGPT 3.5, Claude 2.0, and Perplexity) on three prevalent cardiovascular diseases (CVDs): myocardial infarction, heart failure, and arrhythmias. These four chatbots were selected for their popularity and high usage rates. Using Google Trends, the top five U.S. search queries related to heart attack, arrhythmia, and heart failure from September 29, 2018, to September 29, 2023, were identified; these queries were chosen because they accounted for over 80% of the public's searches on these topics. The chatbot responses were blinded and analyzed by two evaluators using DISCERN for quality, the Patient Education Materials Assessment Tool (PEMAT) for understandability and actionability, and Flesch-Kincaid scores for readability. Statistical tests included the Kruskal-Wallis test for DISCERN scores, the chi-square test for PEMAT scores, and one-way ANOVA for Flesch-Kincaid scores. Bard generated responses with a statistically significantly lower Flesch-Kincaid reading score than the other chatbots. Bard and ChatGPT 3.5 provided more actionable responses. Among the CVD topics, "heart attack" yielded lower-grade-level responses and more actionable information than "arrhythmia" and "heart failure." This study is among the first to assess AI credibility in disseminating cardiovascular information, and it highlights how acute pathologic events may prompt more actionable and accessible chatbot responses. As AI continues to evolve, collaboration among healthcare professionals, researchers, and developers is crucial to ensuring the safe and effective use of AI in patient education and public health.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations