This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI via Large Language Models: Translating Neural Network Behavior into Interpretable Decision Trees
Citations: 0
Authors: 1
Year: 2025
Abstract
Deep learning models achieve state-of-the-art performance across various domains but suffer from a lack of interpretability. We propose a novel approach that leverages large language models (LLMs), such as GPT-4, to generate human-readable decision trees that approximate and explain the behavior of complex black-box neural networks. Our method facilitates transparent model auditing and knowledge distillation by transforming neural predictions into decision rules via zero-shot prompting. Experiments on healthcare and finance datasets show that the LLM-generated trees achieve high fidelity with the underlying models while improving interpretability.
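The abstract describes fitting an interpretable surrogate (a decision tree) to a black-box model's predictions and measuring fidelity, i.e., agreement between surrogate and original model. The paper's method uses LLM prompting to produce the tree; as a minimal dependency-free sketch of the underlying surrogate-fidelity idea, the example below uses a hand-coded stand-in for the neural network and searches for the best depth-1 decision rule (a stump). The `black_box` function and all thresholds are illustrative assumptions, not the paper's setup.

```python
import random

# Hypothetical stand-in for a black-box neural network: predicts 1
# when a weighted sum of the two input features exceeds a threshold.
def black_box(x):
    return 1 if 0.8 * x[0] - 0.3 * x[1] > 0.2 else 0

random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
y = [black_box(x) for x in X]  # query the black box for labels

# Fit a depth-1 surrogate "decision tree" (a stump): choose the
# feature/threshold pair whose rule best agrees with the black box.
best = (0.0, 0, 0.0)  # (fidelity, feature index, threshold)
for f in range(2):
    for t in sorted({round(x[f], 2) for x in X}):
        preds = [1 if x[f] > t else 0 for x in X]
        fid = sum(p == label for p, label in zip(preds, y)) / len(y)
        best = max(best, (fid, f, t))

fidelity, feature, threshold = best
print(f"IF x[{feature}] > {threshold:.2f} THEN 1 ELSE 0  (fidelity={fidelity:.2f})")
```

Fidelity here plays the role the abstract assigns to it: the fraction of inputs on which the interpretable rule reproduces the black-box prediction. A single stump cannot match the tilted decision boundary exactly, so fidelity stays below 1; deeper trees (or, in the paper, LLM-generated rule sets) trade interpretability for fidelity.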
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,792 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,331 citations
"Why Should I Trust You?"
2016 · 14,605 citations
Generative adversarial networks
2020 · 13,213 citations