This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A cross-sectional study assessing AI-generated patient information guides on common cardiovascular conditions
Citations: 1
Authors: 7
Year: 2024
Abstract
Background: Patient education is essential for the management of CVD, as it enables earlier diagnosis, early treatment, and prevention of complications. Artificial intelligence is an increasingly popular resource with applications in virtual patient counselling. Thus, this study aimed to compare AI-generated responses for patient education guides on common cardiovascular diseases using ChatGPT and Google Gemini. Methods: The study assessed the responses generated by ChatGPT 3.5 and Google Gemini for patient education brochures on angina, hypertension, and cardiac arrest. Number of words, number of sentences, average word count per sentence, average syllables per word, grade level, and ease level were assessed using the Flesch-Kincaid Calculator, and similarity score was checked using Quillbot. Reliability was assessed using the modified DISCERN score. Statistical analysis was performed using R version 4.3.2. Results: The statistical analysis showed no statistically significant differences between the responses generated by the two AI tools on any variable except the ease score (p=0.2043), which was statistically superior for ChatGPT. The correlation coefficient between the two tools was negative for both the ease score (r=-0.9986, p=0.0332) and the reliability score (r=-0.8660, p=0.3333), but was statistically significant only for the ease score. Conclusions: The study demonstrated no significant differences between the responses generated by the AI tools for patient education brochures. Further research is needed to assess the capabilities of these AI tools and to ensure that accurate, up-to-date information is generated, to the benefit of overall public well-being.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations