This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI Across Domains: Techniques, Domain-Specific Applications, and Future Directions
Citations: 1
Authors: 15
Year: 2024
Abstract
Explainability in artificial intelligence (AI) has become crucial for ensuring transparency, trust, and usability across diverse application domains, such as healthcare, finance, and autonomous systems. This comprehensive review analyzes the state of research on explainability techniques, categorizing approaches into model-agnostic, model-specific, and hybrid methods. Key techniques, such as SHAP, LIME, and rule-based explanations, are discussed alongside their respective strengths and limitations. The review also delves into domain-specific applications, highlighting unique interpretability requirements in sectors like medical diagnostics, credit scoring, and autonomous decision-making. We further explore the evaluation metrics and benchmarks essential for assessing the quality and effectiveness of explainable AI, addressing challenges such as computational complexity, user-centered design, and ethical considerations. By identifying gaps in current methodologies, this review proposes future research directions aimed at developing adaptable, cross-domain explainability frameworks, enhancing robustness against adversarial manipulations, and promoting ethically aligned AI.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,562 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,298 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,384 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations