This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Evaluating Explainable Artificial Intelligence Methods for Interpretable Machine Learning Models in Large Scale Enterprise Data Analytics Systems
Citations: 0
Authors: 3
Year: 2026
Abstract
Explainable Artificial Intelligence (XAI) has become a critical area of research in artificial intelligence, focusing on improving the transparency and interpretability of machine learning (ML) models, often referred to as "black-box" models. The need for XAI techniques arises from the inherent complexity of ML models, which can make their decision-making processes difficult for users to understand. This study investigates various XAI techniques, including LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), to assess their impact on model interpretability without significantly compromising predictive performance. A comparative experimental design was used, applying these XAI methods to different ML models, including deep neural networks and ensemble methods, within large-scale enterprise data analytics systems. The results indicate that XAI methods significantly enhance model transparency and decision traceability, allowing users to understand the influence of individual features on predictions. While a slight reduction in predictive accuracy was observed, especially with simpler models, the trade-off between interpretability and performance was deemed acceptable, particularly in fields requiring transparency, such as healthcare, finance, and autonomous systems. The use of XAI in enterprise data systems has practical implications for fostering trust and enabling informed decision-making among stakeholders. Furthermore, the study discusses the challenges and limitations of applying XAI techniques, such as complexity, scalability, and model-specific constraints. Future research is suggested to focus on developing more scalable and efficient XAI methods, enhancing their applicability across various model types, and addressing the challenges of real-time applications. This will be crucial for the widespread adoption of XAI in critical domains, promoting the ethical use of AI while maintaining predictive accuracy.
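The abstract's core idea, that SHAP quantifies "the influence of individual features on predictions," can be illustrated with a minimal sketch. This is not the paper's code: it computes exact Shapley values by brute force over all feature coalitions (feasible only for a handful of features; libraries such as `shap` use approximations), and the linear `model` and the zero `baseline` are hypothetical stand-ins chosen so the attributions can be checked by hand.

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all coalitions of the other features."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                # features in S (and i) take their observed value,
                # the rest fall back to the baseline
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# hypothetical "model": a transparent linear scorer, so each feature's
# true contribution (coefficient times value) is known in advance
def model(v):
    return 2 * v[0] + 3 * v[1] - 1 * v[2]

x, base = [1, 1, 1], [0, 0, 0]
phi = exact_shapley(model, x, base)
# efficiency property: attributions sum to f(x) - f(baseline)
```

For this linear model the attributions recover the coefficients exactly; for black-box models the same procedure (or a sampled approximation of it) yields the per-feature influence scores the study evaluates.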
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,535 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,269 citations
"Why Should I Trust You?"
2016 · 14,361 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,153 citations