This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Exploring the Performance and Explainability of BERT for Medical Image Protocol Assignment
2 citations · 3 authors · 2023
Abstract
Although deep learning has become state of the art for numerous tasks, it has yet to be adopted in many specialized domains. High-stakes environments such as medical settings pose additional challenges for deep learning algorithms due to trust and safety concerns. In this work, we propose to address these issues by evaluating the performance and explainability of a Bidirectional Encoder Representations from Transformers (BERT) model for the task of medical image protocol assignment. Specifically, we evaluate the performance and explainability on this medical image protocol classification task by fine-tuning a pre-trained BERT model and measuring word importance by attributing the classification output to every word through a gradient-based method. We then have a trained radiologist review the resulting word importance scores and assess the validity of the model's decision-making process in comparison to that of a human. Our results indicate that the BERT model is able to identify relevant words that are highly indicative of the target protocol. Furthermore, through the analysis of important words in misclassifications, we are able to reveal potential systematic errors in the model that may be addressed to improve its safety and suitability for use in a clinical setting.
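The abstract describes attributing a classifier's output to individual input words via a gradient-based method. As a minimal sketch of that idea (gradient × input attribution), the toy example below replaces the fine-tuned BERT model with a hypothetical linear classifier over mean-pooled one-hot embeddings, so the gradient can be written down analytically; the vocabulary, weights, and class labels are all invented for illustration.

```python
import numpy as np

# Hypothetical toy setup: a linear classifier over mean-pooled embeddings
# stands in for the fine-tuned BERT model described in the paper.
vocab = {"the": 0, "brain": 1, "mri": 2, "with": 3, "contrast": 4}
E = np.eye(5)  # one-hot word embeddings, dimension 5

# Invented class weights: class 0 ("brain protocol") keys strongly on "brain".
W = np.array([
    [0.1, 2.0, 0.5, 0.0, 0.0],   # class 0
    [0.5, 0.0, 0.0, 0.3, 0.1],   # class 1
])

sentence = ["the", "brain", "mri"]
ids = [vocab[w] for w in sentence]
x = E[ids]                      # token embeddings, shape (n_tokens, 5)
pooled = x.mean(axis=0)         # mean pooling
logits = W @ pooled
c = int(np.argmax(logits))      # predicted class

# For this linear model, d(logits[c]) / d(x[t]) = W[c] / n for every token t.
n = len(ids)
grads = np.tile(W[c] / n, (n, 1))

# Gradient x input: per-token importance score for the predicted class.
importance = (grads * x).sum(axis=1)
for word, score in zip(sentence, importance):
    print(f"{word}: {score:.3f}")
```

With these invented weights, "brain" receives the highest attribution for the predicted protocol class, mirroring the paper's finding that attribution scores can surface the words most indicative of the target protocol. For a real BERT model the gradient is obtained by backpropagation to the embedding layer rather than analytically.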
Related Works
"Why Should I Trust You?"
2016 · 14,326 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,691 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,413 citations