This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable machine learning for predicting postoperative length of stay after gastrectomy: a nationwide study using XGBoost and SHAP
Citations: 1
Authors: 6
Year: 2025
Abstract
Background: Gastric cancer remains a major cause of cancer-related morbidity and mortality. Despite advances in surgical and perioperative care, prolonged hospitalization continues to strain healthcare systems. Predicting postoperative length of stay (LOS) could support personalized care and efficient resource allocation. Japan's nationwide Diagnosis Procedure Combination (DPC) database provides real-world data for large-scale analysis, but no study has applied machine learning to predict LOS after gastrectomy.

Methods: This retrospective study included 26,097 patients who underwent gastrectomy between 2017 and 2022 at 472 hospitals in Japan. Using XGBoost, we developed a predictive model based on 1,433 admission-time variables extracted from the DPC database. Model performance was evaluated using Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) under five-fold cross-validation. SHAP values were used to interpret feature importance.

Results: The final model achieved an RMSE of 3.74 days and an MAE of 2.82 days. Key predictors of LOS included surgical procedure (laparoscopic distal gastrectomy and open total gastrectomy), designated cancer hospital status, hospital size, peritoneal dissemination, and admission ADL score. SHAP analysis revealed that laparoscopic distal gastrectomy and higher hospital volume were associated with shorter LOS, whereas open total gastrectomy was associated with longer LOS.

Conclusions: We developed a machine learning model that predicts postoperative length of stay with an error range of 2-4 days using admission data. This proof-of-concept study demonstrates the feasibility of predicting length of stay from admission data, showing that explainable AI can replicate intuitive patterns in surgical oncology while also surfacing unexpected insights from administrative data. These findings highlight the clinical potential of explainable AI for perioperative workflow optimization.
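The two evaluation metrics reported in the abstract, RMSE and MAE, can be computed directly from observed and predicted lengths of stay. A minimal sketch follows; the four patient values below are hypothetical toy data, not taken from the study:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error: penalizes large deviations more heavily."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation, here in days."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observed vs. predicted postoperative LOS (days) for four patients.
los_actual = [10, 12, 9, 14]
los_predicted = [11, 10, 9, 16]

print(rmse(los_actual, los_predicted))  # 1.5
print(mae(los_actual, los_predicted))   # 1.25
```

Because RMSE squares errors before averaging, it is more sensitive to occasional large mispredictions than MAE, which is why the study's RMSE (3.74 days) exceeds its MAE (2.82 days).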
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,693 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,598 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,124 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,871 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations