This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Machine learning for predicting burnout among healthcare workers: a systematic review and meta-analysis
Citations: 2
Authors: 5
Year: 2025
Abstract
BACKGROUND: Burnout among healthcare workers (HCWs) is a major occupational health challenge, with detrimental consequences for both staff well-being and patient care. Machine learning (ML) offers potential for early detection and prevention, but evidence synthesis on its predictive performance and applicability is lacking.

AIMS: To systematically evaluate the performance, methodological quality, and clinical applicability of ML models for predicting burnout in HCWs.

DESIGN: Systematic review and meta-analysis.

METHODS: Ten databases (PubMed, Web of Science, Cochrane Library, Embase, CINAHL, PsycINFO, Scopus, China National Knowledge Infrastructure, Chinese Biomedical Literature Database, and Wanfang) were searched for studies published from inception to 13 February 2025. Eligible studies developed or validated ML models for HCW burnout prediction, using clinically validated tools (e.g. Maslach Burnout Inventory). Two reviewers independently extracted data and assessed study quality using the Prediction Model Risk of Bias Assessment Tool for Artificial Intelligence (PROBAST-AI). Pooled area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated using a random-effects model. Subgroup analyses explored heterogeneity.

RESULTS: < 0.0001). Key predictors clustered into five categories: demographic/occupational, psychological/behavioral, organizational/social, physiological/wearable, and activity/work patterns. All studies showed high or unclear risk of bias in at least one PROBAST-AI domain.

CONCLUSIONS: ML models show promise for predicting burnout in HCWs but are limited by methodological weaknesses, heterogeneity, and lack of external validation. Advancing this field requires rigorous design, transparent reporting, multimodal data integration, and ethical safeguards to enable trustworthy clinical use.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations