OpenAlex · Updated hourly · Last updated: 3 May 2026, 22:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI for Decision-Making: A Hybrid Approach to Trustworthy Computing

2025 · 0 citations · International Journal of Computational and Experimental Science and Engineering · Open Access

Citations: 0 · Authors: 6 · Year: 2025

Abstract

In the evolving landscape of intelligent systems, ensuring transparency, fairness, and trust in artificial intelligence (AI) decision-making is paramount. This study presents a hybrid Explainable AI (XAI) framework that integrates rule-based models with deep learning techniques to enhance interpretability and trustworthiness in critical computing environments. The proposed system employs Layer-Wise Relevance Propagation (LRP) and SHAP (SHapley Additive exPlanations) for local and global interpretability, respectively, while leveraging a Convolutional Neural Network (CNN) backbone for accurate decision-making across diverse domains, including healthcare, finance, and cybersecurity. The hybrid model achieved an average accuracy of 94.3%, a precision of 91.8%, and an F1-score of 93.6%, while maintaining a computational overhead of only 6.7% compared to standard deep learning models. The trustworthiness index, computed from interpretability, robustness, and fairness metrics, reached 92.1%, demonstrating significant improvement over traditional black-box models. This work underscores the importance of explainability in AI-driven decision-making and provides a scalable, domain-agnostic solution for trustworthy computing. The results confirm that integrating explainability mechanisms does not compromise performance and can enhance user confidence, regulatory compliance, and ethical AI deployment.
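The SHAP component named in the abstract is grounded in Shapley values from cooperative game theory. As a minimal illustration of that underlying idea (not the paper's implementation), the sketch below computes exact Shapley attributions from the coalition formula for a hypothetical three-feature linear model, where the value of a coalition is the model output with in-coalition features set to the instance and the rest held at baseline means (an independence assumption):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for a coalition value function over n players.

    phi_i = sum over S subset of N\{i} of
            |S|! (n-|S|-1)! / n!  *  (v(S + {i}) - v(S))
    """
    phis = []
    for i in range(n):
        others = [p for p in range(n) if p != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phis.append(phi)
    return phis

# Hypothetical linear model f(x) = w . x with baseline feature means mu.
w  = [2.0, -1.0, 0.5]
x  = [3.0,  1.0, 4.0]
mu = [1.0,  1.0, 2.0]

def v(S):
    # Coalition value: model output with features in S set to the instance x,
    # remaining features held at their baseline means.
    return sum(w[j] * (x[j] if j in S else mu[j]) for j in range(3))

print(shapley_values(v, 3))  # approx [4.0, 0.0, 1.0], i.e. w_j * (x_j - mu_j)
```

For a linear model under feature independence, the exact Shapley attribution of feature j collapses to the closed form w_j·(x_j − μ_j); the exhaustive-subset computation above is exponential in the number of features, which is why the SHAP library approximates it for real models.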
