This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Inaccurate information regarding cardiovascular disease prevention enabled by generative artificial intelligence
0
Citations
9
Authors
2026
Year
Abstract
Inaccurate information regarding cardiovascular disease (CVD) prevention is prevalent on the internet and may influence medical decisions. Artificial intelligence "bots" are present on the internet and may be used for medical questions. This physician-led experiment evaluated the generation of inaccurate CVD information by two widely used generative artificial intelligence (genAI) models, OpenAI o1 and DeepSeek-R1. Performed in February 2025, the experiment evaluated genAI responses on nine commonly relevant CVD prevention topics, including statin therapy, supplements, and LDL cholesterol. Prompts were devised in two "tones", termed a neutral tone prompt and an inaccuracy tone prompt, the latter of which specifically requested inaccurate information. Two board-certified preventive cardiologists graded responses as appropriate, borderline, or inappropriate based on content and references. For the nine neutral tone prompts, 88.9% (8/9) of OpenAI o1's responses and 66.7% (6/9) of DeepSeek-R1's were graded as appropriate. For the inaccuracy tone prompts, OpenAI o1 produced no appropriate responses (0/9), with 22.2% (2/9) graded as borderline and 77.8% (7/9) as inappropriate. All of DeepSeek-R1's replies (9/9) were graded as inappropriate. These findings highlight the relative ease with which genAI models can be prompted to produce inaccurate information on CVD prevention topics that are highly relevant to public health, and underscore the need for further research and policy interventions to mitigate AI-driven informational risks.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations