This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Transformative impact of explainable artificial intelligence: bridging complexity and trust
6
Citations
5
Authors
2025
Year
Abstract
Artificial Intelligence and Deep Learning have gained widespread adoption across sectors and industries, from healthcare and finance to industrial management. Explainable Artificial Intelligence (XAI) is urgently needed to bridge the gap between society's demand for interpretability and trust and the goal of maximizing AI's benefits. This review presents a comprehensive analysis of XAI methodologies across three model types — model-specific, model-agnostic, and hybrid — along with their applications. It discusses sectors such as healthcare, finance, and industrial management, where XAI can improve results and build trust. Key challenges are examined, including the trade-off between accuracy and interpretability, existing scalability issues, and ethical considerations. The paper also discusses future directions, such as domain-specific frameworks, interdisciplinary collaboration, and standardized evaluation metrics, proposed to advance XAI research and applications. The review highlights the potential of XAI to equip society with modern AI that delivers precise results, greater accountability, and more transparency.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,403 cit.
Generative Adversarial Nets
2023 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,251 cit.
"Why Should I Trust You?"
2016 · 14,281 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,129 cit.