This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Bringing Explainability to Deep Learning–Based Clinical Support Decision Systems
Citations: 0
Authors: 5
Year: 2022
Abstract
Background: Various deep learning models have been developed to support clinical decision making. However, despite promising results, the black-box nature of neural networks makes it difficult for humans to understand how the parameters within these models combine to produce a given classification result. Consequently, reluctance toward artificial-intelligence-based clinical decision support models remains high among both patients and physicians. Explainability may therefore become a critical feature for clinical decision support systems to increase user acceptance, and may become a legal necessity in the near future.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations