OpenAlex · Updated hourly · Last updated: 28 Mar 2026, 22:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Uncertainty quantification in deep learning is unsatisfactory for clinical applications and complex decision making

2026 · 0 citations · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

Deep learning has significant potential to enhance medical services, but low tolerance for error and the risk of poor performance on unseen data have slowed the widespread adoption of models in practical settings. In principle, uncertainty quantification (UQ) may be used to evaluate the trustworthiness of predictions, facilitating the effective use of models in medical applications. UQ techniques in deep learning aim to reliably express the doubt in a measurement or prediction. However, common UQ techniques and evaluation metrics in deep learning only consider uncertainty reliability as viewed from relatively simple measurement frameworks, into which contextual factors relevant to complex medical decision making cannot easily be integrated. Even in cases where these factors may be quantified and considered, common methods do not achieve or assess all aspects of reliability that are relevant to clinical applications. We describe these shortcomings and propose research priorities to help improve the effectiveness of UQ for medical applications, and to realise the positive impact deep learning could have on patient outcomes.
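For context (this sketch is illustrative, not taken from the paper): one common family of UQ techniques the abstract alludes to draws multiple stochastic predictions for the same input (e.g. via Monte Carlo dropout or an ensemble) and summarises the resulting distribution. A minimal NumPy sketch, assuming the samples are already available as softmax outputs:

```python
import numpy as np

def predictive_uncertainty(mc_probs):
    """Summarise uncertainty from T stochastic forward passes.

    mc_probs: array of shape (T, n_classes); each row is a softmax
    output from one Monte Carlo sample (e.g. one dropout mask).
    Returns (total, epistemic) uncertainty in nats.
    """
    mean_p = mc_probs.mean(axis=0)                    # averaged predictive distribution
    total = -np.sum(mean_p * np.log(mean_p + 1e-12))  # predictive entropy (total uncertainty)
    per_sample = -np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=1)
    aleatoric = per_sample.mean()                     # expected entropy (data uncertainty)
    epistemic = total - aleatoric                     # mutual information (model disagreement)
    return total, epistemic

# Confident samples that agree -> low total and epistemic uncertainty
agree = np.array([[0.95, 0.05], [0.94, 0.06], [0.96, 0.04]])
# Confident samples that disagree -> high epistemic uncertainty
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.95, 0.05]])

t1, e1 = predictive_uncertainty(agree)
t2, e2 = predictive_uncertainty(disagree)
```

The paper's point is that scalar summaries like these, however well calibrated, omit the clinical context (costs of different error types, downstream decisions) needed for complex medical decision making.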

Topics

Adversarial Robustness in Machine Learning · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education