This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Interpretable Machine Learning for Transparent Decision-Making: A Conceptual and Applied Framework for Explainable Artificial Intelligence
Citations: 0
Authors: 1
Year: 2025
Abstract
The widespread integration of machine learning systems into high-impact domains, including healthcare diagnostics, financial risk assessment, and judicial decision support, has escalated concerns regarding transparency, accountability, and societal trust. While complex, high-performance models often operate as "black boxes," their opacity poses significant ethical, legal, and operational challenges, particularly when automated decisions directly affect human welfare. This study proposes a comprehensive, three-tiered conceptual and applied framework for Explainable Artificial Intelligence (XAI) that systematically integrates intrinsic model transparency, post-hoc interpretability, and human-centered explanation design. We critically examine prevailing XAI methodologies, delineate their theoretical foundations and practical limitations, and introduce a structured, context-sensitive methodology for deploying interpretable machine learning in real-world systems. Through applied case studies in clinical risk prediction and credit scoring, we demonstrate that carefully designed explainability mechanisms can substantially enhance user trust, facilitate regulatory compliance, and improve decision quality without necessitating a significant compromise in predictive accuracy. Our findings underscore the critical importance of contextualized, stakeholder-specific explanations and advocate for interdisciplinary collaboration as a cornerstone for the responsible development and deployment of artificial intelligence.
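The abstract's second tier, post-hoc interpretability, refers to model-agnostic techniques that explain a trained "black box" without modifying it. As a minimal sketch of this idea (not the paper's actual method), the following implements permutation feature importance against a hypothetical credit-scoring function: each feature is shuffled in turn, and the resulting shift in predictions indicates how much the model relies on it. All names and numbers here are illustrative assumptions.

```python
import random

# Hypothetical black-box credit scorer (illustrative stand-in,
# not the model used in the paper's case studies).
def black_box_score(income, debt_ratio, age):
    return 0.6 * income - 0.3 * debt_ratio + 0.1 * age

def permutation_importance(model, rows, n_repeats=30, seed=0):
    """Model-agnostic post-hoc importance: shuffle one feature at a
    time and measure the mean absolute change in predictions."""
    rng = random.Random(seed)
    base = [model(*r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [r[j] for r in rows]
            rng.shuffle(col)  # break the feature's link to the output
            perturbed = [model(*(r[:j] + (v,) + r[j + 1:]))
                         for r, v in zip(rows, col)]
            deltas.append(sum(abs(p - b) for p, b in zip(perturbed, base))
                          / len(rows))
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy applicant data: (income, debt_ratio, age), all normalized to [0, 1].
rows = [(0.9, 0.2, 0.5), (0.4, 0.7, 0.3), (0.7, 0.1, 0.8), (0.2, 0.9, 0.4)]
imps = permutation_importance(black_box_score, rows)
print(imps)  # income should rank above debt ratio, which ranks above age
```

Because the explanation only queries the model through its predictions, the same procedure applies to any scorer, which is what makes post-hoc methods attractive when high-performance opaque models cannot be replaced by intrinsically transparent ones.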
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,786 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,331 citations
"Why Should I Trust You?"
2016 · 14,602 citations
Generative adversarial networks
2020 · 13,213 citations