Towards a mathematical framework to inform Neural Network modelling via
Polynomial Regression
- URL: http://arxiv.org/abs/2102.03865v1
- Date: Sun, 7 Feb 2021 17:56:16 GMT
- Title: Towards a mathematical framework to inform Neural Network modelling via
Polynomial Regression
- Authors: Pablo Morala (1), Jenny Alexandra Cifuentes (1), Rosa E. Lillo (1 and
2), Iñaki Ucar (1) ((1) uc3m-Santander Big Data Institute, Universidad
Carlos III de Madrid, (2) Department of Statistics, Universidad Carlos III
de Madrid)
- Abstract summary: It is shown that almost identical predictions can be obtained when certain conditions are met.
When learning from data generated from polynomials, the proposed method produces polynomials that correctly approximate the data locally.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Even though neural networks are widely used in a large number of
applications, they are still considered black boxes that present difficulties for
dimensioning or evaluating their prediction error. This has led to an
increasing interest in the overlapping area between neural networks and more
traditional statistical methods, which can help overcome those problems. In
this article, a mathematical framework relating neural networks and polynomial
regression is explored by building an explicit expression for the coefficients
of a polynomial regression from the weights of a given neural network, using a
Taylor expansion approach. This is achieved for single hidden layer neural
networks in regression problems. The validity of the proposed method depends on
different factors like the distribution of the synaptic potentials or the
chosen activation function. The performance of this method is empirically
tested via simulation of synthetic data generated from polynomials to train
neural networks with different structures and hyperparameters, showing that
almost identical predictions can be obtained when certain conditions are met.
Lastly, when learning from data generated from polynomials, the proposed method
produces polynomials that correctly approximate the data locally.
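To make the construction concrete, below is a minimal sketch of the Taylor-expansion idea for a single-hidden-layer tanh network with a one-dimensional input. All weights are made-up values, the expansion is taken around zero, and the multivariate case (which needs a multinomial expansion of the synaptic potentials) is omitted; this illustrates the approach rather than reproducing the paper's full algorithm.

```python
import numpy as np
from math import comb

# Made-up single-hidden-layer network with 1-D input:
# f(x) = c + sum_j v[j] * tanh(w[j] * x + b[j])
w = np.array([0.8, -0.5, 0.3])   # input-to-hidden weights (illustrative)
b = np.array([0.1, 0.0, -0.2])   # hidden biases
v = np.array([1.2, 0.7, -0.9])   # hidden-to-output weights
c = 0.05                         # output bias
K = 5                            # Taylor truncation order

# Taylor coefficients of tanh around 0: tanh(z) = z - z^3/3 + 2z^5/15 - ...
taylor = [0.0, 1.0, 0.0, -1/3, 0.0, 2/15]

# Collect the coefficients beta[m] of x^m in the polynomial surrogate:
# each term v_j * a_k * (w_j x + b_j)^k expands via the binomial theorem.
beta = np.zeros(K + 1)
beta[0] = c
for j in range(len(w)):
    for k, a_k in enumerate(taylor):
        for m in range(k + 1):
            beta[m] += v[j] * a_k * comb(k, m) * w[j]**m * b[j]**(k - m)

# The surrogate matches the network where the synaptic potentials stay
# close to the expansion point (here, near zero).
x = np.linspace(-0.5, 0.5, 5)
nn = c + sum(v[j] * np.tanh(w[j] * x + b[j]) for j in range(len(w)))
poly = sum(beta[m] * x**m for m in range(K + 1))
print(np.max(np.abs(nn - poly)))  # small: predictions nearly identical locally
```

The dependence on the expansion point is exactly why the abstract stresses the distribution of the synaptic potentials: if w·x + b wanders far from the expansion point, the truncated Taylor series, and hence the polynomial, degrades.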
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
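As a rough illustration of what "a computational graph of parameters" can mean (the paper's actual graph construction and equivariant architecture are more involved), the sketch below turns a toy MLP's weights into an edge list where every neuron is a node and every weight is an edge feature; all sizes and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP with layer sizes 2 -> 3 -> 1 and made-up weights
sizes = [2, 3, 1]
weights = [rng.normal(size=(sizes[l + 1], sizes[l])) for l in range(2)]

# Every neuron becomes a node; every weight W[l][i, j] becomes a directed
# edge from neuron j of layer l to neuron i of layer l+1 with a scalar feature.
edges = []
for l, W in enumerate(weights):
    src = sum(sizes[:l])        # index offset of layer l's neurons
    dst = sum(sizes[:l + 1])    # index offset of layer l+1's neurons
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            edges.append((src + j, dst + i, W[i, j]))

print(len(edges), "weight edges")   # 2*3 + 3*1 = 9
```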
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Bayesian polynomial neural networks and polynomial neural ordinary differential equations [4.550705124365277]
Symbolic regression with neural networks and neural ordinary differential equations (ODEs) are powerful approaches for equation recovery of many science and engineering problems.
These methods provide point estimates for the model parameters and are currently unable to accommodate noisy data.
We address this challenge by developing and validating the following inference methods: the Laplace approximation, Markov Chain Monte Carlo sampling methods, and Bayesian variational inference.
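As a point of reference for the first of those inference methods, here is a minimal Laplace-approximation sketch on a plain polynomial regression with a Gaussian prior. Because this model is linear in its parameters, the Gaussian approximation to the posterior is exact; the data and hyperparameters are made up and this is not the paper's polynomial-ODE setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a cubic with noise (assumed example, not the paper's benchmark)
x = rng.uniform(-1, 1, 60)
y = 1.5 * x**3 - 0.5 * x + rng.normal(0, 0.1, x.size)

X = np.vander(x, 4, increasing=True)   # features 1, x, x^2, x^3
sigma2, tau2 = 0.1**2, 10.0            # noise variance, Gaussian prior variance

# Laplace approximation: a Gaussian centred at the MAP estimate with
# covariance given by the inverse Hessian of the negative log posterior.
H = X.T @ X / sigma2 + np.eye(4) / tau2
cov = np.linalg.inv(H)
theta_map = cov @ X.T @ y / sigma2

print("MAP coefficients:", theta_map.round(3))
print("posterior std:   ", np.sqrt(np.diag(cov)).round(3))
```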
arXiv Detail & Related papers (2023-08-17T05:42:29Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
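The symmetry in question is easy to verify numerically: permuting the hidden neurons of a feedforward network (rows of the first weight matrix and bias, columns of the second weight matrix) leaves the computed function unchanged. A minimal check with made-up weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-hidden-layer ReLU network with made-up weights
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # input -> hidden
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)   # hidden -> output

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden neurons: reorder rows of (W1, b1) and columns of W2
perm = np.array([2, 0, 3, 1])
x = rng.normal(size=3)
out_a = mlp(x, W1, b1, W2, b2)
out_b = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(out_a, out_b))   # True: the function is unchanged
```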
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
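For readers unfamiliar with the building block involved, below is a minimal leaky integrate-and-fire neuron, the standard unit of spiking networks; its sparse, event-driven output is the source of the efficiency claims. All constants are illustrative, and this is not the paper's regression framework.

```python
import numpy as np

# Leaky integrate-and-fire neuron: integrate input, spike at threshold, reset.
T, dt = 200, 1e-3        # number of steps, step size in seconds
tau, v_th = 20e-3, 1.0   # membrane time constant, spike threshold
I = 1.2 * np.ones(T)     # constant input current (illustrative)

v, spikes = 0.0, []
for t in range(T):
    v += dt / tau * (-v + I[t])   # leaky membrane integration
    if v >= v_th:                 # threshold crossing emits a spike
        spikes.append(t)
        v = 0.0                   # reset after the spike
print(f"{len(spikes)} spikes in {T} steps")  # sparse, event-driven output
```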
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes [6.591194329459251]
We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions.
In particular, we abstract the input-output relation of each neuron by a polynomial approximation.
This results in a superior performance compared to other methods.
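Polynomial zonotopes themselves are beyond a short sketch, but the simpler enclosure idea they refine, interval bound propagation through one layer, fits in a few lines; the weights are made up, and the resulting box is much looser than a polynomial-zonotope enclosure.

```python
import numpy as np

# Interval enclosure of one tanh layer of a made-up network
W = np.array([[0.5, -1.0], [1.2, 0.3]])
b = np.array([0.1, -0.2])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # input box

# Linear layer: split W into positive and negative parts for sound bounds
Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
z_lo = Wp @ lo + Wn @ hi + b
z_hi = Wp @ hi + Wn @ lo + b

# tanh is monotone, so applying it to the endpoints keeps the bounds sound
print(np.tanh(z_lo), np.tanh(z_hi))   # enclosure of the layer's output
```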
arXiv Detail & Related papers (2022-07-06T14:39:19Z)
- Bagged Polynomial Regression and Neural Networks [0.0]
Series and polynomial regression are able to approximate the same function classes as neural networks.
Bagged polynomial regression (BPR) is an attractive alternative to neural networks.
BPR performs as well as neural networks in crop classification using satellite data.
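A minimal version of the general recipe, bootstrap-aggregating low-degree polynomial fits, can be written with scikit-learn as below; the degree, ensemble size, and data are made-up choices, and the paper's exact BPR variant (e.g. how it controls the number of terms) may differ.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]**2 + rng.normal(0, 0.1, 200)

# Bag degree-3 polynomial regressions fit on bootstrap resamples
base = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
bpr = BaggingRegressor(base, n_estimators=25, random_state=0).fit(X, y)
print(bpr.predict(X[:3]))
```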
arXiv Detail & Related papers (2022-05-17T19:55:56Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
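The paper's sketching-based algorithm is specific to its setting; as a generic, clearly different baseline for post-hoc intervals around an already trained network, here is split conformal prediction, which only needs held-out calibration residuals.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Distribution-free prediction intervals around any trained model."""
    resid = np.abs(y_cal - model(X_cal))        # calibration residuals
    n = len(resid)
    k = int(np.ceil((n + 1) * (1 - alpha)))     # conformal quantile index
    # (strictly, k > n would give an infinite interval; capped here for brevity)
    q = np.sort(resid)[min(k, n) - 1]
    pred = model(X_test)
    return pred - q, pred + q                   # ~90% coverage for alpha=0.1

# Usage with any trained regressor wrapped as a function, e.g.:
# lo, hi = split_conformal_interval(lambda X: net.predict(X), X_cal, y_cal, X_test)
```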
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- NN2Poly: A polynomial representation for deep feed-forward artificial neural networks [0.6502001911298337]
NN2Poly is a theoretical approach to obtain an explicit model of an already trained fully-connected feed-forward artificial neural network.
This approach extends a previous idea proposed in the literature, which was limited to single hidden layer networks.
arXiv Detail & Related papers (2021-12-21T17:55:22Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make two well-separated classes of points linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
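A small numerical illustration of the regime the theorem describes: untrained random ReLU features with Gaussian weights and uniform biases render a problem that is not linearly separable in input space separable for a simple least-squares readout. The data and widths are made up, and the paper's "mutual complexity" quantity is not computed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes that are not linearly separable: points on an inner vs. outer ring
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(rng.random(n) < 0.5, 0.5, 1.5)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = np.sign(r - 1.0)

# Random two-layer ReLU features: Gaussian weights, uniform biases (untrained)
m = 500
W = rng.normal(size=(m, 2))
b = rng.uniform(-2, 2, m)
Phi = np.maximum(X @ W.T + b, 0.0)

# Only the linear readout is fit; separation succeeds with high probability
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
acc = np.mean(np.sign(Phi @ coef) == y)
print(f"training accuracy: {acc:.2f}")
```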
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
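The correspondence these mean-field analyses start from is that a residual block is an explicit Euler step of an ODE; the toy iteration below, with made-up weights, makes that concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Residual block x_{t+1} = x_t + h * f(x_t): the explicit Euler step of
# the neural ODE dx/dt = f(x), the starting point of mean-field analyses
W = rng.normal(size=(3, 3)) * 0.1
f = lambda x: np.tanh(W @ x)          # shared residual map (toy choice)

x = rng.normal(size=3)
h, T = 0.1, 50                        # step size and number of layers
for _ in range(T):
    x = x + h * f(x)                  # one residual layer = one Euler step
print("state after 50 residual layers:", x.round(3))
```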
arXiv Detail & Related papers (2020-01-07T13:41:27Z)