OpenAlex · Updated hourly · Last updated: 2026-04-07, 06:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

1356: SHORTCOMINGS OF CHATGPT ANSWERS TO PEDIATRIC CPR QUESTIONS

2026 · 0 citations · Critical Care Medicine
Open full text at the publisher

0 citations · 4 authors · 2026

Abstract

Introduction: Recognition of pediatric cardiac arrest and timely initiation of cardiopulmonary resuscitation (CPR) have been shown to improve survival and neurologic outcomes. Artificial intelligence (AI) chatbots such as ChatGPT are growing in popularity as resources for medical information, but little is known about the quality of the information they provide on pediatric health topics. This study assesses the accuracy, comprehensiveness, and citation quality of ChatGPT answers to questions about child and infant CPR.

Methods: Four prompts (1. How do I do CPR on a kid? 2. How do I perform CPR on a child? 3. How do I perform CPR on a baby? 4. What are the most updated guidelines on pediatric CPR?) were each submitted three times to ChatGPT version 4.0 on a public library computer without login requirements. The accuracy of each line was evaluated by four independent physicians. Comprehensiveness was assessed against a 35-item rubric developed a priori based on the 2020 American Heart Association guidelines. Descriptive statistics are reported below.

Results: Across the 12 elicited responses, ChatGPT produced 435 statements; 120 were headers or filler text and were excluded from analysis. Of the 315 remaining statements, 76.2% were accurate, 8.5% were explicitly incorrect, 13.8% were recommendations inappropriate for lay rescuers (suitable for healthcare providers only), and 1.5% were neither refuted nor explicitly supported by guidelines. Accuracy of individual responses ranged from 25% to 100%, and 83.3% of responses (10/12) included at least one explicitly incorrect line. The most common errors were instructing laypersons to assess the pulse before initiating CPR and inappropriately recommending hands-only CPR. Compression rate and depth were the only items from the comprehensiveness rubric included in every response. Comprehensiveness of responses ranged from 5/35 items (14.3%) to 22/35 (62.9%). 31.2% of citations were non-medical sources, including Wikipedia and Reddit.
Conclusions: ChatGPT’s answers to questions about pediatric CPR varied widely in accuracy and comprehensiveness, and often included at least one piece of inaccurate information. These results highlight the need for clinicians to encourage caregivers to reference reputable sources on pediatric CPR and take classes on this lifesaving intervention rather than rely on AI tools.
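The descriptive statistics above are simple category shares of the 315 analyzed statements. As a minimal sketch, the tallying can be expressed as follows; the category counts used here are hypothetical illustrations chosen to sum to 315, not the study's actual line-by-line tallies (only the published percentages and totals come from the abstract):

```python
# Sketch of the descriptive statistics reported in the Results section.
# NOTE: the counts below are hypothetical examples, not the study's data;
# only the totals (435 statements, 120 excluded, 315 analyzed) are from
# the abstract, and rounding means these shares need not match it exactly.

def percentages(counts: dict[str, int]) -> dict[str, float]:
    """Return each category's share of the total, rounded to one decimal."""
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical split of the 315 analyzed statements:
counts = {
    "accurate": 240,
    "explicitly incorrect": 27,
    "inappropriate for lay rescuers": 43,
    "neither refuted nor supported": 5,
}

print(percentages(counts))
```

With these illustrative counts, the "accurate" share comes out to 76.2%, matching the reported figure; the remaining categories land within rounding distance of the published values.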


Topics

Artificial Intelligence in Healthcare and Education · Cardiac Arrest and Resuscitation · Misinformation and Its Impacts