This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Interpretable machine learning text classification for clinical computed tomography reports – a case study of temporal bone fracture
Citations: 8
Authors: 6
Year: 2023
Abstract
Machine learning (ML) has demonstrated success in classifying patients' diagnostic outcomes from free-text clinical notes. However, due to the complexity of machine learning models, interpreting the mechanism behind classification results remains difficult. We investigated interpretable representations of text-based machine learning classification models. We created machine learning models to classify temporal bone fractures based on 164 temporal bone computed tomography (CT) text reports, adopting the XGBoost, Support Vector Machine, Logistic Regression, and Random Forest algorithms. To interpret the models, we used two major methodologies: (1) we calculated the average word frequency score (WFS) for keywords, which shows the frequency gap between positively and negatively classified cases; (2) we used Local Interpretable Model-Agnostic Explanations (LIME) to show the word-level contribution to bone fracture classification. In temporal bone fracture classification, the random forest model achieved an average F1-score of 0.93. WFS revealed a difference in keyword usage between fracture and non-fracture cases, and LIME visualized the keywords' contributions to the classification results. The evaluation of LIME-based interpretation achieved the highest interpretation accuracy of 0.97. The interpretable text explainer can improve physicians' understanding of machine learning predictions. By providing simple visualizations, our model can increase trust in computerized models and supports more transparent computerized decision-making in clinical settings.
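The abstract describes the word frequency score (WFS) only as the frequency gap of a keyword between positively and negatively classified cases. As an illustration, here is a minimal, hypothetical sketch of such a score, assuming WFS is the difference in a keyword's mean per-document relative frequency between fracture and non-fracture reports; the paper's exact formula may differ, and the example reports below are invented.

```python
from collections import Counter


def word_frequency_score(keyword, positive_docs, negative_docs):
    """Hypothetical WFS sketch: difference in a keyword's mean
    per-document relative frequency between positive (fracture)
    and negative (non-fracture) report groups."""
    def mean_relative_freq(docs):
        if not docs:
            return 0.0
        total = 0.0
        for doc in docs:
            tokens = doc.lower().split()
            if tokens:
                # relative frequency of the keyword in this report
                total += Counter(tokens)[keyword] / len(tokens)
        return total / len(docs)

    return mean_relative_freq(positive_docs) - mean_relative_freq(negative_docs)


# Invented example reports (not from the paper's 164-report dataset)
fracture_reports = [
    "longitudinal fracture of the temporal bone",
    "fracture line noted",
]
non_fracture_reports = [
    "no acute fracture",
    "normal temporal bone ct",
]

score = word_frequency_score("fracture", fracture_reports, non_fracture_reports)
print(round(score, 3))  # positive score: keyword is more frequent in fracture reports
```

A positive score indicates the keyword is, on average, relatively more frequent in fracture reports; a negative score indicates the opposite.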
Related works
"Why Should I Trust You?"
2016 · 14,307 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,679 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,411 citations