Explaining Neural Networks without Access to Training Data
- URL: http://arxiv.org/abs/2206.04891v1
- Date: Fri, 10 Jun 2022 06:10:04 GMT
- Title: Explaining Neural Networks without Access to Training Data
- Authors: Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt
- Abstract summary: We consider generating explanations for neural networks in cases where the network's training data is not accessible.
$\mathcal{I}$-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability.
We extend the $\mathcal{I}$-Net framework to the cases of standard and soft decision trees as surrogate models.
- Score: 8.250944452542502
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider generating explanations for neural networks in cases where the
network's training data is not accessible, for instance due to privacy or
safety issues. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free
approach to post-hoc, global model interpretability that does not require
access to training data. They formulate interpretation as a machine learning
task that maps network representations (parameters) to a representation of an
interpretable function. In this paper, we extend the $\mathcal{I}$-Net
framework to the cases of standard and soft decision trees as surrogate models.
We propose a suitable decision tree representation and design of the
corresponding $\mathcal{I}$-Net output layers. Furthermore, we make
$\mathcal{I}$-Nets applicable to real-world tasks by considering more realistic
distributions when generating the $\mathcal{I}$-Net's training data. We
empirically evaluate our approach against traditional global, post-hoc
interpretability approaches and show that it achieves superior results when the
training data is not accessible.
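The core $\mathcal{I}$-Net idea, mapping a network's flattened parameters to the parameters of an interpretable surrogate, can be sketched as follows. This is a minimal illustration, not the paper's trained model: the "I-Net" here is a single random linear map, the target network and the depth-2 soft decision tree layout are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target network: one hidden layer. Its flattened weights
# are the only input the I-Net sees -- no training data is needed.
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 1))   # hidden -> output
theta = np.concatenate([W1.ravel(), W2.ravel()])

# Surrogate: a depth-2 soft decision tree over 4 input features needs
# 3 inner nodes (a weight vector plus bias each) and 4 leaf values.
n_features, n_inner, n_leaves = 4, 3, 4
out_dim = n_inner * (n_features + 1) + n_leaves

# Stand-in "I-Net": a random linear layer mapping theta to tree
# parameters. In the paper this map is itself a trained network.
A = rng.normal(size=(theta.size, out_dim)) / np.sqrt(theta.size)
tree_params = theta @ A

splits = tree_params[: n_inner * (n_features + 1)].reshape(n_inner, n_features + 1)
leaves = tree_params[n_inner * (n_features + 1):]

def soft_tree_predict(x, splits, leaves):
    """Route x through the depth-2 soft tree: sigmoid gates weight the leaves."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    g = sig(splits[:, :-1] @ x + splits[:, -1])  # gate value per inner node
    p = np.array([g[0] * g[1], g[0] * (1 - g[1]),
                  (1 - g[0]) * g[2], (1 - g[0]) * (1 - g[2])])  # leaf weights
    return float(p @ leaves)

y = soft_tree_predict(rng.normal(size=n_features), splits, leaves)
```

The leaf weights always sum to one, so the surrogate's output is a convex combination of leaf values, which is what makes the soft tree differentiable and hence representable as an $\mathcal{I}$-Net output layer.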
Related papers
- Fast, Distribution-free Predictive Inference for Neural Networks with
Coverage Guarantees [25.798057062452443]
This paper introduces a novel, computationally efficient algorithm for predictive inference (PI).
It requires no distributional assumptions on the data and can be computed faster than existing bootstrap-type methods for neural networks.
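The summary does not spell out the paper's algorithm; as a rough illustration of distribution-free predictive inference in general, here is a split conformal sketch (a standard textbook construction, not the paper's method; the data, the stand-in predictor, and the coverage level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Held-out calibration data and a stand-in for an already-trained model.
x_cal = rng.normal(size=200)
y_cal = 2 * x_cal + rng.normal(size=200)
predict = lambda x: 2 * x

# Conformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1
# Finite-sample-corrected quantile gives >= 90% marginal coverage
# without any distributional assumption on the data.
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

x_new = 0.5
interval = (predict(x_new) - q, predict(x_new) + q)
```

The only requirement is exchangeability of calibration and test points, which is what makes the guarantee distribution-free.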
arXiv Detail & Related papers (2023-06-11T04:03:58Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since models must be trained before their performance can be predicted.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
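Teacher-student self-ensembling is commonly implemented by letting the teacher's weights track an exponential moving average (EMA) of the student's. The sketch below assumes that mean-teacher-style update; the decay constant and the toy "gradient step" are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Move each teacher weight toward the student's by a factor (1 - alpha)."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

student = {"w": np.zeros(3)}
teacher = {"w": np.zeros(3)}
for step in range(100):
    student["w"] = student["w"] + 0.1   # stand-in for a real gradient step
    teacher = ema_update(teacher, student)
```

Because the teacher averages over the student's whole trajectory, its predictions are smoother and more stable, which is the usual motivation for ensembling in adversarial training.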
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where the task is not purely opaque, so side information about the underlying process can be exploited.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- AI without networks [0.0]
We develop a network-free framework for AI incorporating generative modeling.
We demonstrate this framework with examples from three different disciplines - ethology, control theory, and mathematics.
We also propose an easily computed method of credit assignment based on this framework, to help address ethical-legal challenges raised by generative AI.
arXiv Detail & Related papers (2021-06-07T05:50:02Z)
- xRAI: Explainable Representations through AI [10.345196226375455]
We present an approach for extracting, from a trained network, symbolic representations of the mathematical function the network was supposed to learn.
The approach trains a so-called interpretation network that receives the weights and biases of the trained network as input and outputs a numerical representation of that function, which can be directly translated into a symbolic representation.
arXiv Detail & Related papers (2020-12-10T22:49:29Z)
- Interpretable Neural Networks for Panel Data Analysis in Economics [10.57079240576682]
We propose a class of interpretable neural network models that can achieve both high prediction accuracy and interpretability.
We apply the model to predicting individuals' monthly employment status using high-dimensional administrative data.
We achieve an accuracy of 94.5% on the test set, which is comparable to the best-performing conventional machine learning methods.
arXiv Detail & Related papers (2020-10-11T18:40:22Z)
- A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
arXiv Detail & Related papers (2020-02-25T06:09:30Z)
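Under the usual definition of this framework (stated here as an assumption, since the summary does not spell it out), the central quantity is the predictive $\mathcal{V}$-information, the gain in predictability of $Y$ when an observer restricted to the function family $\mathcal{V}$ is given $X$:

```latex
I_{\mathcal{V}}(X \to Y) \;=\; H_{\mathcal{V}}(Y \mid \varnothing) \;-\; H_{\mathcal{V}}(Y \mid X)
```

where $H_{\mathcal{V}}$ denotes predictive conditional entropy with respect to $\mathcal{V}$; when $\mathcal{V}$ is unrestricted, this recovers Shannon mutual information.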
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.