This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Protocol for Evaluating Explainability in Actuarial Models
Citations: 1
Authors: 3
Year: 2025
Abstract
This paper explores the use of explainable artificial intelligence (XAI) techniques in actuarial science to address the opacity of advanced machine learning models in financial contexts. While technological advancements have enhanced actuarial models, their black-box nature poses challenges in highly regulated environments. This study proposes a protocol for selecting and applying XAI techniques to improve interpretability, transparency, and regulatory compliance. It categorizes techniques based on origin, target, and interpretative capacity, and introduces a protocol to identify the most suitable method for actuarial models. The proposed protocol is tested in a case study involving two classification algorithms, gradient boosting and random forest, with accuracies of 0.80 and 0.79, respectively, focusing on two explainability objectives. Several XAI techniques are analyzed, with results highlighting partial dependency variance (PDV) and local interpretable model-agnostic explanations (LIME) as effective tools for identifying key variables. The findings demonstrate that the protocol aids in model selection, internal audits, regulatory compliance, and enhanced decision-making transparency. These advantages make it particularly valuable for improving model governance in the financial sector.
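The abstract highlights LIME as a tool for identifying the key variables of a black-box classifier. As a rough illustration of the idea behind LIME (not the paper's actual protocol or the `lime` library), the sketch below perturbs an instance, labels the perturbations with a hypothetical black-box model, and fits a locally weighted linear surrogate whose coefficients rank feature influence; all model and parameter choices here are illustrative assumptions.

```python
import random
import math

# Hypothetical black-box classifier standing in for a model such as the
# paper's gradient-boosting case study: it fires when a weighted sum of
# two features crosses a threshold, with x1 weighted far more than x2.
def black_box(x1, x2):
    return 1.0 if 2.0 * x1 + 0.5 * x2 > 1.0 else 0.0

def lime_style_weights(instance, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around `instance` and
    return its coefficients (one per feature), LIME-style."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance with Gaussian noise.
        x = [v + rng.gauss(0, 1) for v in instance]
        # Proximity kernel: nearby perturbations get higher weight.
        dist = math.dist(x, instance)
        w.append(math.exp(-(dist ** 2) / (kernel_width ** 2)))
        X.append([1.0] + x)  # leading 1.0 is the intercept column
        y.append(black_box(*x))
    # Weighted least squares via normal equations: (X^T W X) beta = X^T W y
    k = len(X[0])
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n_samples))
          for c in range(k)] for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n_samples))
         for r in range(k)]
    # Solve the small linear system by Gaussian elimination with pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta[1:]  # drop the intercept, keep per-feature coefficients

# Explain an instance near the decision boundary: the surrogate's
# coefficient for x1 should clearly dominate the one for x2.
coef = lime_style_weights([0.4, 0.4])
print(coef)
```

In a real application one would use the `lime` package (or a comparable library) against the trained actuarial model; this sketch only shows why the surrogate's coefficients expose which variables drive a local prediction.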
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations