This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Building Trust and Transparency in AI: A Review of Explainable AI and its Ethical Implications
Citations: 2
Authors: 3
Year: 2024
Abstract
As Artificial Intelligence (AI) becomes increasingly integrated into critical sectors, the need for transparency and trust in AI systems has grown significantly. This paper presents a systematic review of Explainable AI (XAI) and its role in aligning AI development with human values, particularly addressing the ethical concerns surrounding fairness, accountability, and bias. XAI refers to a set of techniques and methods used to make the decision-making processes of AI systems more transparent and understandable to humans. XAI differs from conventional AI methods by empowering users to monitor and interpret AI outputs, fostering trust, and mitigating concerns about opaque decision-making processes. By focusing on the intersection of XAI and human-centered design, this review highlights the potential of XAI to enhance the ethical use of AI, contributing to the creation of transparent, responsible, and socially beneficial AI systems. The study also examines how XAI can support the ethical dimensions of sustainable development goals by driving the responsible development of AI technologies for sustainable infrastructures. Our findings emphasize that incorporating human values into AI design can promote organizational transparency, build public trust, and align AI behaviour with societal expectations. This review contributes to the broader literature on ethical AI, suggesting that future AI systems should not only perform effectively but also act as trustworthy decision-makers that adhere to core societal principles.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,474 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,262 citations
"Why Should I Trust You?"
2016 · 14,326 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,143 citations