An Analytical Approach to Compute the Exact Preimage of Feed-Forward
Neural Networks
- URL: http://arxiv.org/abs/2203.00438v2
- Date: Thu, 3 Mar 2022 07:15:52 GMT
- Title: An Analytical Approach to Compute the Exact Preimage of Feed-Forward
Neural Networks
- Authors: Théo Nancy, Vassili Maillet, Johann Barbier
- Abstract summary: This study gives a method to compute the exact preimage of any Feed-Forward Neural Network with linear or piecewise-linear activation functions in its hidden layers.
In contrast to other methods, it does not return a single solution for a given output but analytically returns the entire, exact preimage.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are a convenient way to automatically fit functions that are
too complex to be described by hand. The downside of this approach is that it
produces a black box, with no understanding of what happens inside. Finding
the preimage would help to better understand how and why a neural network
produces a given output. Because most neural networks are non-injective
functions, it is often impossible to compute the preimage entirely by purely
numerical means. The point of this study is to give a method to compute the
exact preimage of any Feed-Forward Neural Network with linear or
piecewise-linear activation functions in the hidden layers. In contrast to
other methods, this one does not return a single solution for a given output
but analytically returns the entire, exact preimage.
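To make the idea concrete, here is a minimal numpy sketch of the underlying linear algebra (an illustration, not the authors' exact algorithm): for a single linear layer y = Wx + b, the preimage of y is an affine set, namely one particular solution plus the null space of W. For piecewise-linear activations such as ReLU, the same computation applies per activation pattern, intersected with the region where that pattern holds, and the full preimage is the union over patterns.

```python
import numpy as np

def linear_layer_preimage(W, b, y, tol=1e-10):
    """Exact preimage of y under x -> W @ x + b.

    Returns (x_p, N) such that the preimage is the affine set
    {x_p + N @ c : c in R^k}, or None if y is not attainable.
    """
    r = y - b
    # Particular solution via least squares (min-norm solution).
    x_p, *_ = np.linalg.lstsq(W, r, rcond=None)
    if not np.allclose(W @ x_p, r, atol=1e-8):
        return None  # y has no preimage under this layer
    # Null space of W from the SVD: right singular vectors
    # associated with (numerically) zero singular values.
    _, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > tol))
    N = Vt[rank:].T  # columns span ker(W)
    return x_p, N

# Toy usage: a non-injective 1x2 layer, so the preimage is a line in R^2.
W = np.array([[1.0, 1.0]])
b = np.array([0.5])
y = np.array([2.0])
x_p, N = linear_layer_preimage(W, b, y)
# Every x_p + N @ c maps to y; check one member of the preimage:
assert np.allclose(W @ (x_p + N @ np.array([3.0])) + b, y)
```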
Related papers
- Residual Random Neural Networks [0.0]
The single-layer feedforward neural network with random weights is a recurring motif in the neural-network literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples.
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
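As a toy illustration of the random-weights idea above (a generic random-features sketch on synthetic data, not the paper's exact residual model): the hidden layer is random and fixed, only the linear readout is trained, and the number of hidden neurons matches the input dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in R^20 (a hypothetical stand-in for a real dataset).
d, n = 20, 500
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(float) * 2 - 1

# Random, untrained hidden layer with as many neurons as input dimensions.
W = rng.normal(size=(d, d))
H = np.tanh(X @ W)

# Only the linear readout is trained (ridge regression, closed form).
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)

train_acc = np.mean(np.sign(H @ beta) == y)
```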
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
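A minimal sketch of one way to encode a network's parameters as a graph (an assumption made for illustration; the paper's actual construction is more elaborate): neurons become nodes and each weight becomes an attributed directed edge.

```python
import numpy as np

def mlp_to_edges(Ws):
    """Encode an MLP as a directed graph: node k is a neuron, and each
    weight W[i, j] becomes an edge (input neuron -> output neuron, weight)."""
    edges, offset = [], 0
    for W in Ws:
        n_out, n_in = W.shape
        for j in range(n_in):
            for i in range(n_out):
                edges.append((offset + j, offset + n_in + i, float(W[i, j])))
        offset += n_in
    return edges

# Toy usage: a 3-4-2 MLP yields 3*4 + 4*2 = 20 attributed edges.
rng = np.random.default_rng(0)
edges = mlp_to_edges([rng.normal(size=(4, 3)), rng.normal(size=(2, 4))])
assert len(edges) == 20
```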
- Null Space Properties of Neural Networks with Applications to Image Steganography [6.063583864878311]
The null space of a given neural network can tell us the part of the input data that makes no contribution to the final prediction.
One application described here is a method of image steganography.
arXiv Detail & Related papers (2024-01-01T03:32:28Z)
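A toy numpy illustration of the null-space idea (the layer sizes and payload here are assumptions for illustration): any perturbation lying in the null space of the first weight matrix is invisible to the rest of the network, which is what makes a steganographic embedding possible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first layer of a network: 16 outputs from 64 inputs,
# so ker(W) is (at least) 48-dimensional.
W = rng.normal(size=(16, 64))

# Basis of the null space from the SVD.
_, s, Vt = np.linalg.svd(W)
N = Vt[len(s):].T  # 64 x 48, columns span ker(W)

x = rng.normal(size=64)          # "cover" input
secret = rng.normal(size=48)     # payload embedded in null-space coordinates
x_stego = x + N @ secret

# The first layer (hence the whole network) cannot see the payload:
assert np.allclose(W @ x, W @ x_stego)
```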
- Instance-wise Linearization of Neural Network for Model Interpretation [13.583425552511704]
The challenge lies in the non-linear behavior of the neural network.
For a neural network model, the non-linear behavior is often caused by non-linear activation units of a model.
We propose an instance-wise linearization approach that reformulates the forward computation of a neural network prediction.
arXiv Detail & Related papers (2023-10-25T02:07:39Z)
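A minimal sketch of instance-wise linearization for a ReLU network (a generic construction consistent with the summary above, not necessarily the paper's exact procedure): freeze the activation pattern at the input of interest and collapse the network into one affine map.

```python
import numpy as np

def instance_linearization(Ws, bs, x0):
    """For a ReLU network f(x) = W_L(...relu(W_1 x + b_1)...) + b_L, return
    (A, c) with f(x) = A @ x + c on the activation region containing x0."""
    A, c = np.eye(x0.shape[0]), np.zeros(x0.shape[0])
    h = x0
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = W @ h + b
        A, c = W @ A, W @ c + b
        if i < len(Ws) - 1:                # ReLU on hidden layers only
            mask = (z > 0).astype(float)   # activation pattern at x0
            h = mask * z
            A, c = mask[:, None] * A, mask * c
        else:
            h = z
    return A, c

# Toy check: the local affine map reproduces the network output at x0.
rng = np.random.default_rng(2)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
bs = [rng.normal(size=8), rng.normal(size=3)]
x0 = rng.normal(size=4)
A, c = instance_linearization(Ws, bs, x0)
h = np.maximum(Ws[0] @ x0 + bs[0], 0)
assert np.allclose(A @ x0 + c, Ws[1] @ h + bs[1])
```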
- A max-affine spline approximation of neural networks using the Legendre transform of a convex-concave representation [0.3007949058551534]
This work presents a novel algorithm for transforming a neural network into a spline representation.
The only constraint is that the function be bounded and possess a well-defined second derivative.
It can also be performed over the whole network rather than on each layer independently.
arXiv Detail & Related papers (2023-07-16T17:01:20Z)
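A 1-D convex toy example of the max-affine idea (the function x^2 and the knot placement are assumptions for illustration): a convex function is approximated from below by the maximum of its tangent lines, which is the spirit of the Legendre-transform view the title refers to.

```python
import numpy as np

# A convex function and its derivative (bounded on the region of interest
# with a well-defined second derivative, as the paper requires).
f  = lambda x: x ** 2
df = lambda x: 2 * x

# Tangent line at each knot t: slope a = f'(t), intercept b = f(t) - f'(t)*t.
knots = np.linspace(-2, 2, 9)
a = df(knots)
b = f(knots) - a * knots

# Max-affine spline: pointwise max over the affine pieces (exact at the
# knots, a lower bound elsewhere by convexity).
spline = lambda x: np.max(a * x[:, None] + b, axis=1)

x = np.linspace(-2, 2, 401)
max_err = np.max(f(x) - spline(x))   # small, and shrinks as knots are refined
```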
- Why do CNNs excel at feature extraction? A mathematical explanation [53.807657273043446]
We introduce a novel model for image classification, based on feature extraction, that can be used to generate images resembling real-world datasets.
In our proof, we construct piecewise linear functions that detect the presence of features, and show that they can be realized by a convolutional network.
arXiv Detail & Related papers (2023-07-03T10:41:34Z)
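A small sketch of a piecewise linear feature detector of the kind such proofs construct (the "tent" shape and its parameters are illustrative assumptions): three ReLU units combine into a bump that fires only when a feature value is near a target.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

def tent(x, t, w):
    """Piecewise linear 'feature present' score: peaks at x = t, zero
    outside [t - w, t + w]; built from three ReLU units, so it is exactly
    realizable by one hidden ReLU layer plus a linear readout."""
    return (relu(x - (t - w)) - 2 * relu(x - t) + relu(x - (t + w))) / w

x = np.linspace(-1, 2, 7)
scores = tent(x, t=0.5, w=0.25)   # equals 1 where the feature value is 0.5
```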
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
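For flavor, a generic importance-sampling subset sketch (a standard construction used here for illustration, not the paper's provable coreset algorithm): sample points with probability proportional to their loss and reweight so the subset loss is unbiased.

```python
import numpy as np

def sampled_subset(losses, m, rng):
    """Pick m points with probability proportional to their loss and weight
    them by 1/(m * p_i), so the weighted subset loss is an unbiased
    estimate of the full loss."""
    p = losses / losses.sum()
    idx = rng.choice(len(losses), size=m, p=p)
    weights = 1.0 / (m * p[idx])
    return idx, weights

rng = np.random.default_rng(3)
losses = rng.exponential(size=10_000)      # stand-in per-point losses
idx, w = sampled_subset(losses, m=200, rng=rng)
estimate = np.sum(w * losses[idx])         # ~ losses.sum() in expectation
```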
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representations can achieve improved sample complexity compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
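A minimal sketch of one way to encourage sparse hidden neurons (an L1 activation penalty, used here as a generic stand-in for the paper's structural sparsity constraints):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

def restoration_loss(W1, W2, x_noisy, x_clean, lam=1e-3):
    """Reconstruction loss plus an L1 penalty that pushes the hidden
    neuron activations toward sparsity."""
    h = relu(W1 @ x_noisy)            # hidden code
    x_hat = W2 @ h                    # restored signal
    return np.sum((x_hat - x_clean) ** 2) + lam * np.sum(np.abs(h))

# Toy usage on random weights and a noisy signal.
rng = np.random.default_rng(4)
W1, W2 = rng.normal(size=(64, 128)), rng.normal(size=(128, 64))
x_clean = rng.normal(size=128)
x_noisy = x_clean + 0.1 * rng.normal(size=128)
loss = restoration_loss(W1, W2, x_noisy, x_clean)
```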
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
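A minimal sketch of the recovery setup (assuming PyTorch, with a smooth 1-D signal standing in for an image): the convolutional generator is never trained on data; gradient descent fits only the random measurements, and the architecture's bias toward smooth outputs does the rest.

```python
import torch

torch.manual_seed(0)

# Compressive measurements y = A x* of an unknown structured signal x*.
n, m = 256, 64
t = torch.linspace(0, 1, n)
x_true = torch.sin(4 * torch.pi * t)
A = torch.randn(m, n) / m ** 0.5
y = A @ x_true

# Un-trained convolutional generator: fixed random input z, only the
# conv weights are optimized to explain the measurements.
net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 32, 9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(32, 32, 9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(32, 1, 9, padding=4),
)
z = torch.randn(1, 1, n)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    x_hat = net(z).flatten()
    loss = torch.sum((A @ x_hat - y) ** 2)   # fit the measurements only
    loss.backward()
    opt.step()
# Gradient descent drives x_hat toward a smooth reconstruction of x_true.
```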
This list is automatically generated from the titles and abstracts of the papers on this site.