This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial Intelligence–enabled Decision Support in Surgery
Citations: 63 · Authors: 13 · Year: 2023
Abstract
OBJECTIVE: To summarize state-of-the-art artificial intelligence-enabled decision support in surgery and to quantify deficiencies in scientific rigor and reporting.

BACKGROUND: To positively affect surgical care, decision-support models must exceed current reporting guideline requirements by performing external and real-time validation, enrolling adequate sample sizes, reporting model precision, assessing performance across vulnerable populations, and achieving clinical implementation; the degree to which published models meet these criteria is unknown.

METHODS: Embase, PubMed, and MEDLINE databases were searched from their inception to September 21, 2022, for articles describing artificial intelligence-enabled decision support in surgery that uses preoperative or intraoperative data elements to predict complications within 90 days of surgery. Scientific rigor and reporting criteria were assessed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines.

RESULTS: Sample size ranged from 163 to 2,882,526, with 8 of 36 articles (22.2%) featuring sample sizes of less than 2000; 7 of these 8 articles (87.5%) had below-average (<0.83) area under the receiver operating characteristic curve or accuracy. Overall, 29 articles (80.6%) performed internal validation only, 5 (13.8%) performed external validation, and 2 (5.6%) performed real-time validation. Twenty-three articles (63.9%) reported precision. No articles reported performance across sociodemographic categories. Thirteen articles (36.1%) presented a framework that could be used for clinical implementation; none assessed clinical implementation efficacy.

CONCLUSIONS: Artificial intelligence-enabled decision support in surgery is limited by reliance on internal validation, small sample sizes that risk overfitting and sacrifice predictive performance, and failure to report confidence intervals, precision, equity analyses, and clinical implementation. Researchers should strive to improve scientific quality.
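The abstract evaluates models by area under the receiver operating characteristic curve (AUROC) and criticizes failure to report precision. For context, a minimal illustrative sketch of both metrics on toy risk scores (not data or code from the article; threshold and scores are invented for illustration):

```python
def auroc(pos_scores, neg_scores):
    """AUROC via pairwise comparison (Mann-Whitney U statistic):
    the probability a random positive case scores above a random negative one."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def precision(pos_scores, neg_scores, threshold=0.5):
    """Precision = TP / (TP + FP) at a fixed decision threshold."""
    tp = sum(s >= threshold for s in pos_scores)  # true positives
    fp = sum(s >= threshold for s in neg_scores)  # false positives
    return tp / (tp + fp) if (tp + fp) else float("nan")

# Toy example: predicted complication risk for patients with/without complications
with_complication = [0.9, 0.8, 0.4]
without_complication = [0.3, 0.7, 0.2]
print(auroc(with_complication, without_complication))      # ~0.889
print(precision(with_complication, without_complication))  # ~0.667
```

Note that a model can have a high AUROC yet poor precision when complications are rare, which is why the review flags unreported precision as a deficiency.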
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,697 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,602 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,127 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,872 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- American College of Surgeons (US)
- University of Florida Health (US)
- University of Pennsylvania (US)
- Stanford University (US)
- Harvard University (US)
- Hadassah Medical Center (IL)
- Holyoke Community College (US)
- Medical University of South Carolina (US)
- Intuitive Surgical (US)
- Vanderbilt University Medical Center (US)
- University of Kentucky (US)
- University of Minnesota (US)
- Institute for Medical Informatics and Biostatistics (CH)