OpenAlex · Updated hourly · Last updated: 30.03.2026, 18:33

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Impact of Explainable AI On EHR-Based Clinical Risk Prediction: A Quantitative Evaluation of Transparency and Diagnostic Accuracy

2024 · 0 citations · Open Access
Open full text at publisher

Citations: 0

Authors: 2

Year: 2024

Abstract

This study quantitatively evaluated the impact of explainable artificial intelligence (XAI) on electronic health record (EHR)-based clinical risk prediction by comparing diagnostic accuracy, probability reliability, transparency stability, and clinician-centered utility within a controlled paired design. A retrospective cohort of 12,480 index patient-encounter observations was constructed from a hospital-based EHR system, with 3,744 encounters allocated to the independent test set. Identical feature engineering pipelines, cohort definitions, and data splits were applied to both non-explainable baseline models and explainable models augmented with structured explanation mechanisms. Diagnostic performance was assessed using discrimination, precision-oriented metrics, calibration summaries, and threshold-based error profiles. Transparency was evaluated using global and local explanation concentration indices and stability measures across 20 repeated training runs and controlled perturbations. The explainable model demonstrated higher discrimination performance (mean 0.826, SD 0.016) compared to the baseline model (mean 0.812, SD 0.018), along with improved precision-sensitive performance (0.447 vs. 0.421). Calibration slope improved from 0.91 to 0.97, and false-negative rate decreased from 18.6% to 16.8% at the prespecified operating threshold. Explanation stability was high, with a mean rank correlation of 0.91 across repeated runs. Clinician-centered evaluation (n = 64) showed strong internal reliability (Cronbach’s alpha range: 0.86–0.91) and high comprehension accuracy (84.3%, SD 6.7). Regression analysis indicated that the explainable condition significantly predicted improved discrimination (B = 0.014, 95% CI 0.006–0.022, p = 0.001) and increased odds of correct classification (OR = 1.28, p = 0.002). Explanation clarity significantly predicted clinician adoption proxy scores (B = 0.37, p < 0.001). 
Overall, findings demonstrated that explainability integration was associated with modest yet consistent improvements in technical performance, probability reliability, explanation stability, and clinician-facing interpretive outcomes under a rigorously controlled evaluation framework.
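The abstract reports explanation stability as a mean rank correlation of 0.91 across 20 repeated training runs. As an illustration only (the paper's exact procedure is not given here; this assumes a Spearman-style pairwise rank correlation over per-run feature-importance vectors, a common choice for this kind of measure), such a statistic could be computed as:

```python
# Sketch of a stability measure: mean pairwise Spearman rank correlation
# across feature-importance vectors, one vector per repeated training run.
# Assumption: the paper's "rank correlation across repeated runs" resembles
# this; the abstract does not specify the exact formula.
from itertools import combinations


def _ranks(values):
    """1-based ranks with average-rank tie handling."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def _pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def spearman(a, b):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    return _pearson(_ranks(a), _ranks(b))


def mean_rank_correlation(importance_runs):
    """Mean pairwise Spearman rho over per-run importance vectors."""
    pairs = list(combinations(importance_runs, 2))
    return sum(spearman(a, b) for a, b in pairs) / len(pairs)
```

A value near 1.0, as reported, would indicate that the explanation mechanism ranks features almost identically across retrainings.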

Similar works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare