This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for Decision-Making: A Hybrid Approach to Trustworthy Computing
Citations: 0
Authors: 6
Year: 2025
Abstract
In the evolving landscape of intelligent systems, ensuring transparency, fairness, and trust in artificial intelligence (AI) decision-making is paramount. This study presents a hybrid Explainable AI (XAI) framework that integrates rule-based models with deep learning techniques to enhance interpretability and trustworthiness in critical computing environments. The proposed system employs Layer-Wise Relevance Propagation (LRP) and SHAP (SHapley Additive exPlanations) for local and global interpretability, respectively, while leveraging a Convolutional Neural Network (CNN) backbone for accurate decision-making across diverse domains, including healthcare, finance, and cybersecurity. The hybrid model achieved an average accuracy of 94.3%, a precision of 91.8%, and an F1-score of 93.6%, while maintaining a computational overhead of only 6.7% compared to standard deep learning models. The trustworthiness index, computed from interpretability, robustness, and fairness metrics, reached 92.1%, demonstrating significant improvement over traditional black-box models. This work underscores the importance of explainability in AI-driven decision-making and provides a scalable, domain-agnostic solution for trustworthy computing. The results confirm that integrating explainability mechanisms does not compromise performance and can enhance user confidence, regulatory compliance, and ethical AI deployment.
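The paper's implementation is available only from the publisher, so the sketch below is purely illustrative of the SHAP component the abstract describes: it attributes a toy CNN's predictions with shap.GradientExplainer, an expected-gradients approximation of Shapley values. The architecture, synthetic data, and background-set size are placeholder assumptions, not the authors' configuration.

```python
# Illustrative sketch only, not the paper's code: SHAP-style global
# attribution on a small CNN standing in for the hybrid framework's
# SHAP component. All shapes and hyperparameters below are assumptions.
import numpy as np
import tensorflow as tf
import shap

# Toy CNN backbone (assumed architecture).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Synthetic data keeps the sketch self-contained.
x = np.random.rand(128, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=128)
model.fit(x, y, epochs=1, verbose=0)

# GradientExplainer approximates SHAP values via expected gradients;
# the background subset anchors the expectation over which each
# pixel's contribution to each class prediction is computed.
explainer = shap.GradientExplainer(model, x[:64])
shap_values = explainer.shap_values(x[:4])  # per-pixel, per-class attributions
print(np.asarray(shap_values).shape)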
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,800 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,335 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,610 citations
Generative adversarial networks
2020 · 13,218 citations