OpenAlex · Updated hourly · Last updated: May 1, 2026, 06:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluation of the content quality of regional anesthesia and postoperative analgesia approaches generated by ChatGPT-4.0 according to surgical incision sites

2025 · 0 citations · Challenge Journal of Perioperative Medicine · Open Access
Open full text at the publisher

0 Citations · 1 Author · Year: 2025

Abstract

Background: Large language models (LLMs) are increasingly consulted for perioperative decision support, yet their ability to give professional-grade guidance for regional anesthesia and analgesia remains uncertain.

Materials and Methods: In a prospective observational study, we presented eight incision-based figures (Items 2–9) representing common abdominal incisions to ChatGPT-4.0 and requested a regional anesthesia technique and postoperative analgesia plan for each. Five independent anesthesiologists rated each response on Accuracy, Comprehensiveness, and Safety using a 5-point Likert scale. Inter-rater reliability was summarized with Fleiss' κ. One non-incision item (Item 10) was analyzed descriptively and excluded from pooled statistics. Single-shot prompts were used.

Results: Mean ratings were high: Accuracy 4.28, Comprehensiveness 4.30, Safety 4.00 (1–5 scale). Inter-rater agreement was substantial for Safety (κ=0.76) and lower for Accuracy (κ=0.33) and Comprehensiveness (κ=0.31). Two consistent low points emerged: the right-lower-quadrant (McBurney/Lanz) incision (Safety mean 3.0) and the suprapubic (Pfannenstiel) incision (Accuracy 3.0; Comprehensiveness 3.4; Safety 3.4). When explicitly asked for postoperative plans, the model rarely proposed neuraxial techniques (e.g., epidural), favoring fascial-plane/peripheral strategies.

Conclusions: An LLM produced clinically usable suggestions for common abdominal incisions with strong safety agreement, but performance was not uniform, and neuraxial options were under-recommended. These tools may serve as a helpful adjunct for education and option-generation, yet they should be used with expert oversight and local protocols. Future work should test repeated sampling, prompt standardization, model/tier comparisons, and link recommendations to patient outcomes.
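The pooled agreement statistic reported in the abstract, Fleiss' κ, is computed from an items-by-categories count table (how many of the raters assigned each item to each Likert category). A minimal sketch follows; the ratings below are hypothetical illustrations, not the study's actual data:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a counts matrix of shape (items, categories),
    where counts[i, j] is how many raters put item i in category j.
    Assumes every item was rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    # Observed per-item agreement: proportion of agreeing rater pairs
    P_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()
    # Chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    P_e = np.sum(p_j**2)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical Safety ratings: 8 incision items, 5 raters,
# columns = Likert categories 1..5 (counts per category).
safety_counts = [
    [0, 0, 0, 1, 4],
    [0, 0, 0, 0, 5],
    [0, 0, 0, 1, 4],
    [0, 1, 3, 1, 0],   # a divergent item pulls agreement down
    [0, 0, 0, 0, 5],
    [0, 0, 1, 4, 0],
    [0, 0, 0, 1, 4],
    [0, 0, 0, 0, 5],
]
print(round(fleiss_kappa(safety_counts), 2))
```

Unlike a simple percent-agreement figure, κ discounts the agreement expected by chance given how often each category is used overall, which is why a high mean rating can coexist with a modest κ.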


Topics

Artificial Intelligence in Healthcare and Education · Simulation-Based Education in Healthcare · Cardiac, Anesthesia and Surgical Outcomes