Closed-Form Interpretation of Neural Network Classifiers with Symbolic Regression Gradients
- URL: http://arxiv.org/abs/2401.04978v1
- Date: Wed, 10 Jan 2024 07:47:42 GMT
- Title: Closed-Form Interpretation of Neural Network Classifiers with Symbolic Regression Gradients
- Authors: Sebastian Johann Wetzel
- Abstract summary: In contrast to neural network-based regression, for classification, it is in general impossible to find a one-to-one mapping from the neural network to a symbolic equation.
I embed a trained neural network into an equivalence class of classifying functions that base their decisions on the same quantity.
I interpret neural networks by finding an intersection between this equivalence class and human-readable equations defined by the search space of symbolic regression.
- Score: 0.7832189413179361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: I introduce a unified framework for interpreting neural network classifiers
tailored toward automated scientific discovery. In contrast to neural
network-based regression, for classification, it is in general impossible to
find a one-to-one mapping from the neural network to a symbolic equation even
if the neural network itself bases its classification on a quantity that can be
written as a closed-form equation. In this paper, I embed a trained neural
network into an equivalence class of classifying functions that base their
decisions on the same quantity. I interpret neural networks by finding an
intersection between this equivalence class and human-readable equations
defined by the search space of symbolic regression. The approach is not limited
to classifiers or full neural networks: it can be applied to arbitrary neurons
in hidden layers or latent spaces, or used to simplify the process of
interpreting neural network regressors.
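To make the framework concrete: all functions that classify based on the same quantity as a network f form the equivalence class {σ∘f : σ strictly monotone}, and every member of that class has input gradients parallel to those of f. Candidate closed-form expressions can therefore be ranked by how well their normalized gradients align with the network's. Below is a minimal sketch of this criterion; the toy `net_logit`, the finite-difference gradients, and the three-expression candidate library are illustrative stand-ins for a trained network and a real symbolic-regression search space, not the paper's implementation.

```python
import numpy as np

# Toy stand-in for a trained classifier's logit; it bases its decision on
# the quantity x1^2 + x2^2 (a real application would use the trained
# network's pre-activation output instead).
def net_logit(x):
    return np.tanh(x[:, 0]**2 + x[:, 1]**2 - 1.0)

def grad(f, x, eps=1e-5):
    """Central finite-difference input gradient of f at each row of x."""
    g = np.zeros_like(x)
    for j in range(x.shape[1]):
        dx = np.zeros_like(x)
        dx[:, j] = eps
        g[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return g

def alignment_loss(f_cand, x, g_net):
    """1 - mean |cos| between candidate and network gradients; zero iff
    the candidate depends on the same quantity as the network (up to a
    monotone transformation) on these sample points."""
    g = grad(f_cand, x)
    cos = np.abs(np.sum(g * g_net, axis=1)) / (
        np.linalg.norm(g, axis=1) * np.linalg.norm(g_net, axis=1) + 1e-12)
    return 1.0 - cos.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 2))
g_net = grad(net_logit, x)

# Tiny stand-in for the search space of a symbolic regression engine.
candidates = {
    "x1 + x2":     lambda x: x[:, 0] + x[:, 1],
    "x1 * x2":     lambda x: x[:, 0] * x[:, 1],
    "x1^2 + x2^2": lambda x: x[:, 0]**2 + x[:, 1]**2,
}
for name, f in candidates.items():
    print(f"{name:12s} loss = {alignment_loss(f, x, g_net):.4f}")
```

Only x1^2 + x2^2 achieves a near-zero loss, because it is the quantity the toy logit actually depends on; replacing the logit with any monotone transformation of itself would leave the ranking unchanged, which is exactly why a one-to-one mapping from network to equation is unnecessary.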
Related papers
- Closed-Form Interpretation of Neural Network Latent Spaces with Symbolic Gradients [0.0]
We introduce a framework for finding closed-form interpretations of neurons in latent spaces of artificial neural networks.
The interpretation framework is based on embedding trained neural networks into an equivalence class of functions that encode the same concept.
arXiv Detail & Related papers (2024-09-09T03:26:07Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
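As a hedged illustration of the parameter-graph idea (my construction under assumed conventions, not necessarily the paper's exact encoding): neurons become nodes whose feature is their bias, and each weight becomes a directed edge carrying the weight value, giving a graph on which a GNN can respect neuron-permutation symmetry. The layer sizes and random weights below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical MLP, layer sizes 2 -> 3 -> 1; random weights stand in
# for a trained model.
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
biases  = [rng.normal(size=3), rng.normal(size=1)]

sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]  # [2, 3, 1]
offset = np.cumsum([0] + sizes)            # global id of first neuron per layer

# Nodes are neurons; the bias is the node feature (input neurons get 0).
node_feat = np.concatenate([np.zeros(sizes[0])] + biases)

# One directed edge per weight, carrying the weight as its edge feature.
edges, edge_feat = [], []
for l, W in enumerate(weights):
    for i in range(W.shape[0]):            # target neuron in layer l+1
        for j in range(W.shape[1]):        # source neuron in layer l
            edges.append((offset[l] + j, offset[l + 1] + i))
            edge_feat.append(W[i, j])

print(f"{len(node_feat)} nodes, {len(edges)} edges")  # 6 nodes, 9 edges
```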
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, but they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Taming Binarized Neural Networks and Mixed-Integer Programs [2.7624021966289596]
We show that binarized neural networks admit a tame representation.
This makes it possible to use the framework of Bolte et al. for implicit differentiation.
This approach could also be used for a broader class of mixed-integer programs.
arXiv Detail & Related papers (2023-10-05T21:04:16Z)
- Neural Networks are Decision Trees [0.0]
We show that any neural network having piece-wise linear activation functions can be represented as a decision tree.
The representation is an exact equivalence, not an approximation, so the accuracy of the neural network is preserved exactly.
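A minimal sketch of the equivalence for a one-hidden-layer ReLU network (random weights stand in for a trained model; the paper's construction covers general piece-wise linear networks): each hidden unit acts as an internal tree node testing the sign of its pre-activation, and once that activation pattern (a leaf) is fixed, the network is affine, so the tree reproduces the network exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny one-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
# Random weights stand in for a trained model.
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def tree(x):
    # Each hidden unit is an internal node testing sign(w . x + b); the
    # activation pattern selects one of 2**3 leaves.
    pattern = (W1 @ x + b1 > 0).astype(float)
    # At a fixed leaf the ReLU is a constant 0/1 mask, so the model is affine.
    return W2 @ (pattern * (W1 @ x + b1)) + b2

# Equivalence, not approximation: outputs agree exactly.
for _ in range(5):
    x = rng.normal(size=2)
    assert np.allclose(net(x), tree(x))
print("tree reproduces the network exactly on random inputs")
```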
arXiv Detail & Related papers (2022-10-11T06:49:51Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can separate two classes of data with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- The Representation Theory of Neural Networks [7.724617675868718]
We show that neural networks can be represented via the mathematical theory of quiver representations.
We show that network quivers gently adapt to common neural network concepts.
We also provide a quiver representation model to understand how a neural network creates representations from the data.
arXiv Detail & Related papers (2020-07-23T19:02:14Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.