This is an overview page with metadata for this scientific work. The full article is available from the publisher.
EVALUATING MISINFORMATION REGARDING CARDIOVASCULAR DISEASE PREVENTION OBTAINED ON A POPULAR, PUBLICLY ACCESSIBLE ARTIFICIAL INTELLIGENCE MODEL (GPT-4)
Citations: 0
Authors: 1
Year: 2024
Abstract
Keywords: Artificial intelligence; Misinformation

Misinformation regarding CVD prevention is prevalent on the internet and on social media. Chat-based artificial intelligence (AI) models such as ChatGPT have gained over 100 million users, are publicly accessible, and may provide appropriate information for simple CVD prevention topics. Whether these public AI models may propagate misinformation regarding CVD prevention is uncertain.

This study was performed in March 2024 using the subscription-based version of GPT-4 (OpenAI, USA). Prompts regarding six CVD prevention topics (statin therapy and muscle side effects, dementia, and liver disease; fish oil; supplements; and low-density lipoprotein-cholesterol and heart disease) were posed. Prompts were framed in two tones: a neutral tone and a misinformation-prompting tone. The misinformation-prompting tone requested specific arguments and scientific references to support misinformation. Each tone and topic was prompted in a separate chatbot instance. Each response was reviewed by a board-certified cardiologist specializing in preventive cardiology at a tertiary care center. If a response contained multiple bullet points with individual scientific references, each bullet point was graded separately. Responses were graded as appropriate (accurate content and references), borderline (minor inaccuracies or references published >20 years ago), or inappropriate (inaccurate content and/or references, including non-existent references).

For the six prompts posed with a neutral tone, all responses lacked scientific references and were graded as appropriate (100%). For all six prompts posed with a misinformation-prompting tone, each response consisted of multiple discrete bullet points with a scientific reference for each individual point. Of 31 bullet points across the six topics obtained using a misinformation-prompting tone, 32.2% (10/31) were graded as appropriate, 19.4% (6/31) as borderline, and 48.4% (15/31) as inappropriate.

In this exploratory study, GPT-4, a popular and publicly accessible chat-based AI model, was easily prompted to support CVD prevention misinformation. Misinformation-supporting arguments and scientific references were inappropriate due to inaccurate content and/or references nearly 50% of the time. Robust research efforts and policies are needed to study and prevent AI-enabled propagation of misinformation regarding CVD prevention.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations