This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Comparison of human–AI agreement in ASA scoring by gender and duration of clinical experience: a real-world study
Citations: 0
Authors: 4
Year: 2026
Abstract
Accurate preoperative risk identification is critical for patient safety and postoperative outcomes. Anaesthesiologists make decisions on the basis of ASA classification and additional parameters. Artificial intelligence (AI)-based decision support may offer more objective judgments. In this retrospective multi-rater study, four anaesthesiologists and an AI system independently evaluated 1,000 cases. ASA class, postoperative ICU requirement, anaesthesia preference, intraoperative risk prediction, and additional recommendations were assessed. Concordance was analysed using Krippendorff’s alpha, Cohen’s kappa, Gwet’s AC2, and PABAK, with percentage agreement estimated by bootstrapping. AI–physician agreement was further examined using fixed-effects logistic regression including clinician sex and professional experience as covariates. Physician–physician agreement was generally good to excellent across outcomes, whereas physician–AI agreement was lower and variable when assessed using κ, PABAK, Gwet’s AC2, and observed agreement (Pₒ). The highest AI concordance was observed for intraoperative risk prediction and ICU requirement, while the lowest was for anaesthesia preference. Exploratory analyses suggested that AI–physician concordance may vary by clinician experience and sex; no significant effects of sex or experience were observed for intraoperative anaesthesia-related risk prediction. Although AI shows high concordance with physician decisions in objective/algorithmic domains, concordance remains limited in contextual and experience-based domains (anaesthesia preference). The findings support positioning AI as a safe ‘second eye/warning’ tool within human-in-the-loop workflows, rather than as an independent authority. Prospective, externally validated studies are needed.
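The abstract reports agreement via observed agreement (Pₒ), Cohen's kappa, and PABAK. As a minimal illustrative sketch (not the study's actual analysis code, which also used Krippendorff's alpha, Gwet's AC2, bootstrapping, and logistic regression), these three statistics can be computed for two raters as follows; the example ratings are hypothetical:

```python
def agreement_stats(r1, r2):
    """Observed agreement, Cohen's kappa, and PABAK for two raters.

    PABAK (prevalence-adjusted bias-adjusted kappa) for k categories
    is (k * Po - 1) / (k - 1).
    """
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    k = len(cats)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement expected from each rater's marginal distribution
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    kappa = (po - pe) / (1 - pe)
    pabak = (k * po - 1) / (k - 1)
    return po, kappa, pabak

# Hypothetical ASA classes assigned by a physician and an AI system
physician = [1, 2, 2, 3, 1]
ai_system = [1, 2, 3, 3, 2]
po, kappa, pabak = agreement_stats(physician, ai_system)
print(f"Po={po:.2f}, kappa={kappa:.3f}, PABAK={pabak:.2f}")
```

PABAK is often reported alongside kappa because kappa can be deflated when one category dominates (as with rare ICU admissions), whereas PABAK depends only on the observed agreement.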
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations