This is an overview page with metadata for this scientific work. The full article is available from the publisher.
41: AUTOMATED OBJECT DETECTION IN SIMULATED TRAUMA RESUSCITATION USING COMPUTER VISION MODELS
Citations: 0
Authors: 6
Year: 2026
Abstract
Introduction: Combat trauma care occurs in high-stress, chaotic, and time-sensitive environments where accurate and timely documentation is critical. In practice, documentation is often delayed, incomplete, or inaccurate: fewer than 10% of combat casualties in Iraq and Afghanistan had documented pre-hospital care. Computer vision offers a way to automate this task, yet performance in resuscitation-specific object and task classification remains modest, with mean average precision around 60%. We aimed to build computer vision models to help label key resuscitation actions in combat trauma scenarios.

Methods: The US Military's medical simulation team recorded helmet-mounted video of combat medics providing trauma care on mannequins. These simulations aim to capture realistic trauma scenarios and provide a foundation for training computer vision models to support automatic documentation in pre-hospital combat care. We used You Only Look Once (YOLOv8) to train object detection models on 20 object classes from 5,000 video frames, with a 70/20/10 train/validation/test split. Model performance was measured using mean Average Precision (mAP), the average of precision scores across all classes and recall levels, at an intersection-over-union threshold of 0.5. LLMs were used to assist with coding and writing, with all output reviewed by the authors. This work was started at the 2025 SCCM Datathon.

Results: The dataset included 63 labeled videos featuring combat medics performing standardized trauma procedures on mannequins; the most common procedure was chest seal application. The average simulation video length was 533 s. The YOLO models were configured to detect key objects (e.g., tourniquets, airway devices). The best-performing model achieved a mAP50 of 48%. Performance varied significantly by class, with the best performance for amputation (mAP50 ≈ 88%).
Conclusions: Helmet-mounted video in simulated trauma care offers a promising avenue for training artificial intelligence systems to improve combat medical documentation. Limitations included that all mannequins were male and had light-colored skin. Addressing data diversity, annotation consistency, and simulation-to-reality gaps is essential for advancing automated support in combat trauma documentation.
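The mAP50 metric reported above counts a detection as a true positive only when its bounding box overlaps a ground-truth box with intersection over union (IoU) of at least 0.5. A minimal sketch of that matching rule (box format and function names are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (clamped to zero width/height if disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """Matching rule used by mAP50: IoU must reach 0.5."""
    return iou(pred_box, gt_box) >= threshold
```

Averaging the resulting precision over all recall levels and over the 20 object classes yields the mAP50 figure (48% for the best model here).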
Related Works
A new method of classifying prognostic comorbidity in longitudinal studies: Development and validation
1987 · 49,214 citations
Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017
2018 · 13,809 citations
Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016
2017 · 13,428 citations
The injury severity score: a method for describing patients with multiple injuries and evaluating emergency care.
1974 · 8,023 citations
Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015
2016 · 7,322 citations