OpenAlex · Updated hourly · Last updated: 30.04.2026, 15:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI via Large Language Models: Translating Neural Network Behavior into Interpretable Decision Trees

2025 · 1 author · 0 citations

Open full text at the publisher

Abstract

Deep learning models achieve state-of-the-art performance across various domains but suffer from a lack of interpretability. We propose a novel approach that leverages large language models (LLMs), such as GPT-4, to generate human-readable decision trees that approximate and explain the behavior of complex black-box neural networks. Our method facilitates transparent model auditing and knowledge distillation by transforming neural predictions into decision rules via zero-shot prompting. Experiments on healthcare and finance datasets show that the LLM-generated trees achieve high fidelity with the underlying models while improving interpretability.
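
The abstract describes the pipeline only at a high level. As an illustration, the sketch below shows one way such an LLM-based surrogate could be wired up; the breast-cancer dataset, the MLPClassifier stand-in for the black-box network, the query_llm placeholder, and the hard-coded fidelity check are assumptions made for this example, not details taken from the paper.

# Illustrative sketch (not the paper's code): distill a black-box classifier
# into human-readable decision rules via a zero-shot LLM prompt, then measure
# fidelity as agreement with the black box on held-out data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier


def query_llm(prompt: str) -> str:
    # Placeholder for a zero-shot call to an LLM such as GPT-4. Swap in a
    # real chat-completion client; a canned rule is returned here so the
    # sketch runs without network access.
    return "if mean radius <= 15.0 then class 1 else class 0"


# 1. Train the black-box model whose behavior is to be explained.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# 2. Probe the black box: the LLM only sees (input, prediction) pairs,
#    never the network's weights.
sample = X_test.sample(25, random_state=0)
sample_preds = black_box.predict(sample)

# 3. Build a zero-shot prompt asking for a small, human-readable decision tree.
rows = []
for x, p in zip(sample.values, sample_preds):
    features = {name: round(float(v), 2) for name, v in zip(sample.columns, x)}
    rows.append(f"{features} -> class {p}")
prompt = (
    "Below are inputs and the predictions of a neural network classifier.\n"
    "Write a compact decision tree (if/else rules over the named features)\n"
    "that reproduces these predictions as faithfully as possible.\n\n"
    + "\n".join(rows)
)
rules_text = query_llm(prompt)
print("LLM-generated rules:\n", rules_text)

# 4. Fidelity: agreement between the surrogate tree and the black box on
#    held-out data. The canned rule above is applied directly; a real
#    pipeline would parse rules_text into an executable tree first.
surrogate_preds = (X_test["mean radius"] <= 15.0).astype(int)
fidelity = (surrogate_preds.values == black_box.predict(X_test)).mean()
print(f"Fidelity to the black box: {fidelity:.2%}")

In this reading, fidelity is simply the fraction of held-out inputs on which the LLM-generated rules and the black-box network agree, which matches the abstract's use of the term.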

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare