This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Argumentation approaches for explainable AI in medical informatics
49
Citations
3
Authors
2022
Year
Abstract
Artificial Intelligence algorithms are powerful in performing accurate predictions, but they are often considered black boxes, as they do not provide any explanation of how outputs are derived from inputs or why a decision is taken. The need for completely transparent and eXplainable Artificial Intelligence (XAI) is therefore urgent, as also recognized by the explicit inclusion of the right to explanation in the General Data Protection Regulation (GDPR). There has been much study on diagnosis, decision support, and interpretability, and there is significant interest in the development of Explainable AI in the realm of medicine. Interpretability in the medical field is not just an intellectual curiosity, but a key factor: medical choices impact the lives of patients and involve risk and responsibility for clinicians. This proposal investigates the benefit of using logic approaches for eXplainable AI by showing how their natural explainability and expressiveness help in the design of ethical, explainable and justified intelligent systems. More specifically, the paper focuses on the use of argumentation theory in Medical Informatics by overviewing existing approaches in the literature. The overview categorizes approaches, on the basis of the specific purpose argumentation is used for, into the following categories: Argumentation for Medical Decision Making, Argumentation for Medical Explanations, and Argumentation for Medical Dialogues.
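To give a flavor of the formal machinery the surveyed approaches build on, the following is a minimal sketch of Dung-style abstract argumentation, computing the grounded extension by iterating the characteristic function from the empty set. The argument names and the toy clinical scenario are invented for illustration and do not come from the paper.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function F(S) = {a | S defends a} to its least fixed point."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument a is defended by `extension` if every attacker
        # of a is itself attacked by some argument in `extension`.
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Hypothetical clinical arguments:
#   t1 = "prescribe drug A"
#   t2 = "drug A is contraindicated"
#   t3 = "a lab result rules out the contraindication"
args = {"t1", "t2", "t3"}
atts = {("t2", "t1"), ("t3", "t2")}
print(sorted(grounded_extension(args, atts)))  # → ['t1', 't3']
```

Since t3 defeats the contraindication argument t2, the prescription argument t1 is reinstated; this kind of defeat-and-reinstatement structure is what makes argumentation naturally suited to explainable medical decision support.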
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,464 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14,315 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations