This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Unified Explainability Score (UES): A Comprehensive Framework for Evaluating Trustworthy AI Models
Citations: 0
Authors: 2
Year: 2025
Abstract
Artificial intelligence systems are increasingly used in critical decision-making processes, and the need for effective, reliable explanations of their outputs is greater than ever. While various metrics exist to evaluate explainability, they often focus on isolated aspects such as trustworthiness, clarity, or fidelity, which can lead to incomplete assessments. In this paper, we introduce a novel Composite Explainability Metric (CEM) designed to evaluate the quality of explanations produced by XAI methods across different domains and contexts. By integrating key dimensions of explainability (faithfulness, interpretability, robustness, actionability, and timeliness), CEM provides a unified framework for assessing the effectiveness of explanations. We present a systematic approach for assigning relative weights to each dimension, enabling context-specific adjustment that reflects the unique demands of domains such as healthcare and finance. The proposed framework also includes a normalization process that ensures comparability between metrics and allows the scores to be aggregated into a comprehensive explainability assessment. We validate our metric on simulations and real-world applications, showing how the framework provides meaningful insights into XAI. Our findings highlight the importance of standardized evaluation metrics for fostering trust and transparency, a further step toward the development of responsible AI in high-stakes environments. This work addresses a gap in the evaluation of XAI methods and contributes to the ongoing discourse on trustworthiness and accountability in AI technologies.
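The abstract describes a weighted aggregation of normalized dimension scores. A minimal sketch of that idea is shown below; the function names, the min-max normalization, and the equal example weights are illustrative assumptions, not the authors' published formula.

```python
def normalize(value, lo, hi):
    # Min-max normalization to [0, 1]; assumes hi > lo.
    return (value - lo) / (hi - lo)

def composite_score(raw_scores, ranges, weights):
    # raw_scores, ranges, weights: dicts keyed by dimension name
    # (e.g. "faithfulness", "robustness"). Weights are renormalized
    # to sum to 1 so the result stays in [0, 1].
    total_w = sum(weights.values())
    return sum(
        (weights[k] / total_w) * normalize(raw_scores[k], *ranges[k])
        for k in raw_scores
    )

# Hypothetical example: five dimensions, equal weights.
raw = {"faithfulness": 0.8, "interpretability": 0.6,
       "robustness": 0.9, "actionability": 0.5, "timeliness": 0.7}
ranges = {k: (0.0, 1.0) for k in raw}
weights = {k: 0.2 for k in raw}
score = composite_score(raw, ranges, weights)
```

Context-specific weighting, as the abstract suggests for healthcare or finance, amounts to changing the `weights` dictionary while keeping the normalization and aggregation unchanged.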
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,474 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,262 citations
"Why Should I Trust You?"
2016 · 14,326 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,143 citations