This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Large Language Model (LLM)-Powered Chatbots Fail to Generate Guideline-Consistent Content on Resuscitation and May Provide Potentially Harmful Advice
46 citations · 2 authors · 2023
Abstract
The advice of LLM-powered chatbots on helping a non-breathing victim omits essential details of resuscitation technique and occasionally contains misleading, potentially harmful directives. Further research and regulatory measures are required to mitigate the risks of chatbot-generated misinformation about resuscitation reaching the public.
Similar Works
Ventilation with Lower Tidal Volumes as Compared with Traditional Tidal Volumes for Acute Lung Injury and the Acute Respiratory Distress Syndrome
2000 · 12,768 citations
Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock
2001 · 10,728 citations
Acute renal failure – definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group
2004 · 6,784 citations
Treatment of Comatose Survivors of Out-of-Hospital Cardiac Arrest with Induced Hypothermia
2002 · 5,402 citations
Mild Therapeutic Hypothermia to Improve the Neurologic Outcome after Cardiac Arrest
2002 · 5,207 citations