This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The emergence and need for explainable AI
Citations: 3
Authors: 1
Year: 2023
Abstract
Artificial Intelligence (AI) systems, particularly deep learning models, have revolutionized numerous sectors with their unprecedented performance capabilities. However, the intricate structures of these models often result in a "black-box" characterization, making their decisions difficult to understand and trust. Explainable AI (XAI) emerges as a solution, aiming to unveil the inner workings of complex AI systems. This paper embarks on a comprehensive exploration of prominent XAI techniques, evaluating their effectiveness, comprehensibility, and robustness across diverse datasets. Our findings highlight that while certain techniques excel in offering transparent explanations, others provide a cohesive understanding across varied models. The study accentuates the importance of crafting AI systems that seamlessly marry performance with interpretability, fostering trust and facilitating broader AI adoption in decision-critical domains.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,488 citations
Generative Adversarial Nets
2014 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,263 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,333 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,147 citations