OpenAlex · Updated hourly · Last updated: 29 Mar 2026, 11:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Protocol for Evaluating Explainability in Actuarial Models

2025 · 1 citation · Electronics · Open Access

Citations: 1 · Authors: 3 · Year: 2025

Abstract

This paper explores the use of explainable artificial intelligence (XAI) techniques in actuarial science to address the opacity of advanced machine learning models in financial contexts. While technological advancements have enhanced actuarial models, their black-box nature poses challenges in highly regulated environments. This study proposes a protocol for selecting and applying XAI techniques to improve interpretability, transparency, and regulatory compliance. It categorizes techniques by origin, target, and interpretative capacity, and introduces a protocol to identify the most suitable method for a given actuarial model. The proposed protocol is tested in a case study involving two classification algorithms, gradient boosting and random forest, with accuracies of 0.80 and 0.79 respectively, focusing on two explainability objectives. Several XAI techniques are analyzed, with results highlighting partial dependency variance (PDV) and local interpretable model-agnostic explanations (LIME) as effective tools for identifying key variables. The findings demonstrate that the protocol aids in model selection, internal audits, regulatory compliance, and decision-making transparency. These advantages make it particularly valuable for improving model governance in the financial sector.
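The abstract names partial dependency variance (PDV) as one of the highlighted techniques: the variance of a feature's partial-dependence curve serves as a model-agnostic importance score (a flat curve means the feature barely moves the prediction; a strongly varying curve means it matters). Below is a minimal sketch of that idea, assuming scikit-learn; the synthetic dataset, feature count, and function names are illustrative and not taken from the paper's case study.

```python
# Sketch: partial-dependence-variance (PDV) scores for the two model
# families named in the abstract (gradient boosting and random forest).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.model_selection import train_test_split

# Illustrative synthetic data, standing in for the paper's actuarial dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def pdv_scores(model, X):
    """Variance of the 1-D partial-dependence curve of each feature.

    Higher variance -> the model's average prediction changes more as the
    feature varies -> the feature is more influential.
    """
    scores = []
    for j in range(X.shape[1]):
        pd_result = partial_dependence(model, X, features=[j], kind="average")
        scores.append(float(np.var(pd_result["average"])))
    return scores

for Model in (GradientBoostingClassifier, RandomForestClassifier):
    model = Model(random_state=0).fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    pdv = pdv_scores(model, X_tr)
    ranking = np.argsort(pdv)[::-1]  # features, most to least influential
    print(Model.__name__, "accuracy:", round(acc, 2), "PDV ranking:", ranking)
```

Because PDV only queries the fitted model's predictions, the same scoring function applies unchanged to both classifiers, which is what makes it usable as a common yardstick when a protocol must compare candidate models.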


Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education