This is an overview page with metadata for this academic work. The full article is available from the publisher.
Making AI Models Intelligible to Experts (Interpretability)
Citations: 0
Authors: 1
Year: 2026
Abstract
Artificial intelligence (AI) faces a critical “black box” problem, especially in safety-sensitive domains like healthcare and finance, where transparency is essential. This chapter provides a comprehensive exploration of interpretability, distinguishing it from post-hoc explainability. It covers methods ranging from classical linear models and decision trees to modern techniques such as generalized additive models (GAMs), explainable boosting machines (EBMs), and deep learning interpretation (saliency maps, Grad-CAM, LRP). The discussion extends to inherently interpretable architectures such as prototype-based, neuro-symbolic, and causal models, and situates interpretability within governance, using case studies and tools like Model Cards. We conclude that interpretability is a multidisciplinary imperative for building powerful, transparent, and human-aligned AI.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,772 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,328 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,588 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,205 citations