Universal Approximation Theorem for Neural Networks
- URL: http://arxiv.org/abs/2102.10993v1
- Date: Fri, 19 Feb 2021 08:25:24 GMT
- Title: Universal Approximation Theorem for Neural Networks
- Authors: Takato Nishijima
- Abstract summary: The "Universal Approximation Theorem for Neural Networks" states that the set of functions realized by neural networks is dense in a certain function space under an appropriate setting.
This paper is a comprehensive explanation of the universal approximation theorem for feedforward neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Is there any theoretical guarantee for the approximation ability of neural
networks? The answer to this question is the "Universal Approximation Theorem
for Neural Networks". This theorem states that a neural network is dense in a
certain function space under an appropriate setting. This paper is a
comprehensive explanation of the universal approximation theorem for
feedforward neural networks, its approximation rate problem (the relation
between the number of intermediate units and the approximation error), and
Barron space in Japanese.
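The theorem is easy to see numerically. As a hedged illustration (not code from the paper): fix random sigmoid hidden units and fit only the linear output layer by least squares; the sup-norm error on a compact set shrinks as units are added. The domain, target function, and unit counts below are arbitrary demo choices.

```python
# Minimal illustration of universal approximation (hypothetical example):
# a one-hidden-layer network with random sigmoid units, output weights
# fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)          # compact domain
target = np.sin(3 * x) + 0.5 * x             # continuous target function

def random_sigmoid_features(x, n_units):
    """Features sigma(a*(x - t)) with transition points t spread over the domain."""
    a = rng.normal(0.0, 4.0, n_units)
    t = rng.uniform(-np.pi, np.pi, n_units)
    return 1.0 / (1.0 + np.exp(-(np.outer(x, a) - a * t)))

for n_units in (5, 20, 100):
    H = random_sigmoid_features(x, n_units)
    c, *_ = np.linalg.lstsq(H, target, rcond=None)   # fit the output layer
    print(f"{n_units:4d} hidden units: sup error ≈ {np.abs(H @ c - target).max():.3f}")
```

Classical results of Barron give, for targets with finite Barron norm, an $L^2$ approximation error of order $O(1/\sqrt{n})$ in the number $n$ of hidden units; this is the approximation-rate question the abstract mentions.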
Related papers
- Universal approximation theorem for neural networks with inputs from a topological vector space [0.0]
We study feedforward neural networks with inputs from a topological vector space (TVS-FNNs).
Unlike traditional feedforward neural networks, TVS-FNNs can process a broader range of inputs, including sequences, matrices, functions and more.
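As a toy sketch of the idea (an assumed discretization, not the paper's construction): when the input is a function $x(t)$, the first layer can apply continuous linear functionals, here discretized integrals against weight functions, before ordinary layers take over.

```python
# Toy sketch (an assumption, not the paper's construction): a feedforward
# net whose input is a *function* x(t), handled by a first layer of
# discretized integral functionals  z_i = ∫ w_i(t) x(t) dt.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)               # grid on which inputs are sampled
dt = t[1] - t[0]

n_functionals, n_hidden = 16, 32
W_fun = rng.normal(size=(n_functionals, t.size))   # weight functions w_i(t)
W1 = rng.normal(size=(n_hidden, n_functionals))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)

def tvs_fnn(x_vals):
    """x_vals: samples of the input function on the grid t."""
    z = W_fun @ x_vals * dt                  # linear functionals of the input
    h = np.maximum(W1 @ z + b1, 0.0)         # ordinary ReLU layer
    return w2 @ h

print(tvs_fnn(np.sin(2 * np.pi * t)))        # evaluate on the input x(t) = sin(2πt)
```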
arXiv Detail & Related papers (2024-09-19T17:10:14Z)
- Universal Approximation Theorem for Vector- and Hypercomplex-Valued Neural Networks [0.3686808512438362]
The universal approximation theorem states that a neural network with one hidden layer can approximate continuous functions on compact sets.
It is valid for real-valued neural networks and some hypercomplex-valued neural networks.
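A minimal sketch of one setting covered by such results, the complex-valued network with a "split" activation; the activation choice and sizes below are illustrative assumptions, not taken from the paper.

```python
# Sketch of a complex-valued one-hidden-layer net with a split-type
# activation (illustrative; the paper covers a broader hypercomplex setting).
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 8
W1 = rng.normal(size=(n_hidden, n_in)) + 1j * rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)

def split_tanh(z):
    """Split activation: tanh applied to real and imaginary parts separately."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cvnn(x):
    return w2 @ split_tanh(W1 @ x + b1)

x = rng.normal(size=n_in) + 1j * rng.normal(size=n_in)
print(cvnn(x))
```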
arXiv Detail & Related papers (2024-01-04T13:56:13Z)
- Universal Approximation and the Topological Neural Network [0.0]
A topological neural network (TNN) takes data from a Tychonoff topological space instead of the usual finite dimensional space.
A distributional neural network (DNN) that takes Borel measures as data is also introduced.
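A hedged toy version of "Borel measures as data" (a deep-sets/mean-embedding style reading, not the paper's TNN/DNN construction): represent an empirical measure by the average of a fixed feature map over its atoms, then apply an ordinary network.

```python
# Toy sketch (assumed, mean-embedding style): a net whose input is an
# empirical measure mu = (1/n) * sum_j delta_{x_j}, represented by the
# average of a fixed feature map over its atoms.
import numpy as np

rng = np.random.default_rng(3)
n_feat, n_hidden = 10, 16
A = rng.normal(size=(n_feat, 2))             # random feature map phi(x) = tanh(Ax)
W1 = rng.normal(size=(n_hidden, n_feat))
w2 = rng.normal(size=n_hidden)

def measure_net(atoms):
    """atoms: (n, 2) array of points of an empirical measure on R^2."""
    phi = np.tanh(atoms @ A.T)               # feature map applied to each atom
    z = phi.mean(axis=0)                     # integral of phi against the measure
    return w2 @ np.maximum(W1 @ z, 0.0)

mu_atoms = rng.normal(size=(50, 2))          # a 50-atom empirical measure
print(measure_net(mu_atoms))
```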
arXiv Detail & Related papers (2023-05-26T05:28:10Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Work on the NTK has been devoted to typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, namely polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of the NTK.
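The empirical NTK is concrete enough to compute directly. A hedged sketch for a plain two-layer ReLU net (illustrative only; the paper's finite-width formulation concerns polynomial nets with Hadamard products): each kernel entry is an inner product of parameter gradients.

```python
# Hedged sketch: empirical NTK of a two-layer ReLU net,
#   f(x) = (1/sqrt(m)) * a · relu(W x),
# computed from explicit parameter gradients.
import numpy as np

rng = np.random.default_rng(4)
d, m = 5, 200
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def grads(x):
    """Gradient of f(x) with respect to (W, a), flattened."""
    pre = W @ x                              # pre-activations
    gate = (pre > 0).astype(float)           # ReLU derivative
    gW = np.outer(a * gate, x) / np.sqrt(m)  # d f / d W
    ga = np.maximum(pre, 0.0) / np.sqrt(m)   # d f / d a
    return np.concatenate([gW.ravel(), ga])

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(grads(x1) @ grads(x2))                 # empirical NTK entry K(x1, x2)
```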
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Extending the Universal Approximation Theorem for a Broad Class of Hypercomplex-Valued Neural Networks [1.0323063834827413]
The universal approximation theorem asserts that a single hidden layer neural network approximates continuous functions with any desired precision on compact sets.
This paper extends the universal approximation theorem for a broad class of hypercomplex-valued neural networks.
arXiv Detail & Related papers (2022-09-06T12:45:15Z)
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
- On the space of coefficients of a Feed Forward Neural Network [0.0]
We prove that, given a neural network $\mathcal{N}$ with piecewise linear activation, the space of coefficients describing all equivalent neural networks is a semialgebraic set.
This result is obtained by studying different representations of a given piecewise linear function using the Tarski-Seidenberg theorem.
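One family of equivalences in that coefficient space is easy to exhibit: for ReLU, positively rescaling a hidden unit's incoming weights and bias while inversely rescaling its outgoing weight leaves the realized function unchanged. A minimal check (illustrative; the paper characterizes the full semialgebraic structure):

```python
# Minimal check of a ReLU reparameterization symmetry: scaling a hidden
# unit's incoming weights/bias by c > 0 and its outgoing weight by 1/c
# yields an equivalent network, since relu(c*z) = c*relu(z) for c > 0.
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
w2 = rng.normal(size=4)

def net(W1, b1, w2, x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

c = 7.3                                       # any positive scale
W1s, b1s, w2s = W1.copy(), b1.copy(), w2.copy()
W1s[0] *= c; b1s[0] *= c; w2s[0] /= c         # rescale hidden unit 0

x = rng.normal(size=3)
print(np.isclose(net(W1, b1, w2, x), net(W1s, b1s, w2s, x)))  # True
```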
arXiv Detail & Related papers (2021-09-07T22:47:50Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can make two well-separated classes linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
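A hedged numerical illustration of the statement (the cluster positions, widths, and sizes below are arbitrary demo choices, not the paper's setting): push two well-separated clusters through a random ReLU layer with standard Gaussian weights and uniform biases, then check linear separability of the features.

```python
# Illustration (assumed setup, not the paper's exact model): two separated
# clusters, mapped through a random ReLU layer, then tested for linear
# separability in feature space.
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 100, 2, 500
X = np.vstack([rng.normal(-2.0, 0.5, (n, d)),     # class -1
               rng.normal(+2.0, 0.5, (n, d))])    # class +1
y = np.concatenate([-np.ones(n), np.ones(n)])

W = rng.normal(size=(m, d))                        # standard Gaussian weights
b = rng.uniform(-1.0, 1.0, m)                      # uniformly distributed biases
H = np.maximum(X @ W.T + b, 0.0)                   # random ReLU feature map

# If a linear classifier on H reproduces the labels' signs, the two
# classes are linearly separable after the random layer.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("linearly separable after random layer:", bool(np.all(np.sign(H @ w) == y)))
```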
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
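The surrogate idea can be sketched in a few lines (an illustrative reading, not the paper's exact construction): freeze the ReLU gates, i.e. the activation pattern, computed at reference weights; the surrogate is then linear in each layer's weights and agrees with the true ReLU network at those weights.

```python
# Sketch of a fixed-activation-pattern surrogate (illustrative): freeze
# the ReLU gates computed at the reference weights; the surrogate agrees
# with the ReLU net there and is linear in W1 and in w2.
import numpy as np

rng = np.random.default_rng(7)
W1, w2 = rng.normal(size=(6, 4)), rng.normal(size=6)
x = rng.normal(size=4)

gate = (W1 @ x > 0).astype(float)        # activation pattern at the reference weights

def relu_net(W1, w2, x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def surrogate(W1, w2, x, gate):
    return w2 @ (gate * (W1 @ x))        # gates fixed, no nonlinearity left

print(np.isclose(relu_net(W1, w2, x), surrogate(W1, w2, x, gate)))  # True
```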
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Interval Universal Approximation for Neural Networks [47.767793120249095]
We introduce the interval universal approximation (IUA) theorem.
The IUA theorem shows that neural networks can approximate any continuous function $f$, as has been known for decades, in a way that is amenable to interval analysis.
We study the computational complexity of constructing neural networks that are amenable to precise interval analysis.
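What "amenable to interval analysis" means is concrete: propagate a box through the network with interval arithmetic, which is sound for affine layers and for the monotone ReLU. A minimal sketch (standard interval bound propagation, not the paper's IUA construction):

```python
# Minimal interval bound propagation through affine + ReLU layers
# (standard interval arithmetic, shown to illustrate analyzing a network
# with the interval abstract domain).
import numpy as np

rng = np.random.default_rng(8)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
w2, b2 = rng.normal(size=5), 0.1

def affine_interval(W, b, lo, hi):
    """Sound bounds for W x + b when x lies in the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

lo, hi = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
lo, hi = affine_interval(W1, b1, lo, hi)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
out_lo, out_hi = affine_interval(w2, b2, lo, hi)
print(f"output ∈ [{out_lo:.3f}, {out_hi:.3f}]")
```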
arXiv Detail & Related papers (2020-07-12T20:43:56Z)
- Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics [50.83356836818667]
We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error.
Existing frameworks for neural network optimization analysis, such as mean-field theory and neural tangent kernel theory, typically require taking the infinite-width limit of the network to show its global convergence.
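A finite-dimensional caricature of the Langevin view (the paper's analysis is infinite-dimensional, via transportation-map estimation; the toy loss and constants below are assumptions): gradient descent plus Gaussian noise whose scale is set by an inverse temperature $\beta$.

```python
# Finite-dimensional caricature of gradient Langevin dynamics:
#   theta <- theta - eta * grad L(theta) + sqrt(2 * eta / beta) * xi.
# The stationary law is proportional to exp(-beta * L).
import numpy as np

rng = np.random.default_rng(9)
theta = rng.normal(size=10)
eta, beta = 1e-2, 50.0

def grad_loss(theta):
    """Gradient of a toy quadratic loss L(theta) = ||theta||^2 / 2."""
    return theta

for _ in range(2000):
    noise = rng.normal(size=theta.shape)
    theta = theta - eta * grad_loss(theta) + np.sqrt(2 * eta / beta) * noise

print("final loss:", 0.5 * theta @ theta)   # fluctuates near the Gibbs level d/(2*beta)
```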
arXiv Detail & Related papers (2020-07-11T18:19:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.