This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification
9
Citations
4
Authors
2023
Year
Abstract
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems. Owing to this substantial gain in utility, the use of artificial intelligence in many critical tasks has grown dramatically. In the medical domain, medical image classification systems are being adopted because of their high accuracy and near parity with human physicians on many tasks. However, these artificial intelligence systems are extremely complex and are considered ‘black boxes’, because it is difficult to interpret what exactly led to the predictions these models make. When such systems assist high-stakes decision-making, it is extremely important to be able to understand, verify, and justify the conclusions the model reaches. Techniques for gaining insight into these black-box models belong to the field of explainable artificial intelligence (XAI). In this paper, we evaluated three different XAI methods across two convolutional neural network models trained to classify lung cancer from histopathological images. We visualized the outputs and analyzed the performance of these methods in order to better understand how to apply explainable artificial intelligence in the medical domain.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,474 cit.
Generative Adversarial Nets
2014 · 19,843 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,262 cit.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,326 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,143 cit.