This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Uncertainty quantification in deep learning is unsatisfactory for clinical applications and complex decision making
Citations: 0
Authors: 1
Year: 2026
Abstract
Deep learning has significant potential to enhance medical services, but the low tolerance for error and the risk of poor performance on unseen data have slowed the widespread adoption of models in practical settings. In principle, uncertainty quantification (UQ) may be used to evaluate the trustworthiness of predictions, facilitating the effective use of models in medical applications. UQ techniques in deep learning aim to reliably express the doubt in a measurement or prediction. However, common UQ techniques and evaluation metrics in deep learning only consider uncertainty reliability as viewed from relatively simple measurement frameworks, into which contextual factors relevant to complex medical decision making cannot easily be integrated. Even in cases where these factors can be quantified and considered, common methods do not achieve or assess all aspects of reliability that are relevant to clinical applications. We describe these shortcomings and propose research priorities to help improve the effectiveness of UQ for medical applications, and to realise the positive impact deep learning could have on patient outcomes.
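To make the notion of "expressing doubt in a prediction" concrete, a minimal sketch of one common UQ technique is shown below: deep-ensemble uncertainty, where the predictive entropy of the averaged softmax outputs serves as an uncertainty score. This is an illustrative example only, not the specific method analysed in the article; the function name `ensemble_uncertainty` and its shapes are assumptions for the sketch.

```python
import math

def ensemble_uncertainty(member_probs):
    """Compute the mean prediction and its predictive entropy.

    member_probs: list of softmax vectors (one per ensemble member),
    each a list of class probabilities summing to 1.
    Higher entropy indicates greater uncertainty in the averaged prediction.
    """
    n_members = len(member_probs)
    n_classes = len(member_probs[0])
    # Average the class probabilities across ensemble members.
    mean_probs = [
        sum(m[c] for m in member_probs) / n_members
        for c in range(n_classes)
    ]
    # Predictive entropy of the averaged distribution (natural log).
    entropy = -sum(p * math.log(p) for p in mean_probs if p > 0)
    return mean_probs, entropy
```

For example, an ensemble whose members all agree on one class yields low entropy, while an ensemble whose members disagree yields high entropy, signalling a less trustworthy prediction. As the abstract notes, such scores capture only a narrow notion of reliability and do not by themselves incorporate clinical context.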