Approximation with Neural Networks in Variable Lebesgue Spaces
- URL: http://arxiv.org/abs/2007.04166v1
- Date: Wed, 8 Jul 2020 14:52:48 GMT
- Title: Approximation with Neural Networks in Variable Lebesgue Spaces
- Authors: Ángela Capel and Jesús Ocáriz
- Abstract summary: This paper concerns the universal approximation property with neural networks in variable Lebesgue spaces.
We show that, whenever the exponent function of the space is bounded, every function can be approximated with shallow neural networks with any desired accuracy.
- Score: 1.0152838128195465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper concerns the universal approximation property with neural networks
in variable Lebesgue spaces. We show that, whenever the exponent function of
the space is bounded, every function can be approximated with shallow neural
networks with any desired accuracy. This result in turn shows that universality of
the approximation is determined by the boundedness of the
exponent function. Furthermore, whenever the exponent is unbounded, we obtain
some characterization results for the subspace of functions that can be
approximated.
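As a brief reading aid (standard definitions, not quoted from the paper): the variable Lebesgue space $L^{p(\cdot)}(\Omega)$ is built from a measurable exponent $p : \Omega \to [1,\infty)$ via the modular and the Luxemburg norm,
$$\rho_{p(\cdot)}(f) = \int_{\Omega} |f(x)|^{p(x)}\,dx, \qquad \|f\|_{L^{p(\cdot)}(\Omega)} = \inf\big\{\lambda > 0 : \rho_{p(\cdot)}(f/\lambda) \le 1\big\},$$
while the approximating class consists of shallow (one-hidden-layer) networks
$$N(x) = \sum_{k=1}^{n} c_k\,\sigma(w_k \cdot x + b_k), \qquad c_k, b_k \in \mathbb{R},\; w_k \in \mathbb{R}^d.$$
In this notation, the abstract's statement is that such networks are dense in $L^{p(\cdot)}(\Omega)$ whenever $p^+ = \operatorname{ess\,sup}_{x \in \Omega} p(x) < \infty$, whereas for unbounded exponents the paper characterizes the subspace of functions that can be approximated.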
Related papers
- Universal approximation results for neural networks with non-polynomial activation function over non-compact domains [3.3379026542599934]
We derive universal approximation results for neural networks within function spaces over non-compact subsets of a Euclidean space.
We provide some dimension-independent rates for approximating a function with sufficiently regular and integrable Fourier transform by neural networks with non-polynomial activation function.
arXiv Detail & Related papers (2024-10-18T09:53:20Z) - Dimension-independent learning rates for high-dimensional classification problems [53.622581586464634]
We show that every $RBV^2$ function can be approximated by a neural network with bounded weights.
We then prove the existence of a neural network with bounded weights approximating a classification function.
arXiv Detail & Related papers (2024-09-26T16:02:13Z) - Global universal approximation of functional input maps on weighted spaces [3.8059763597999012]
We introduce so-called functional input neural networks defined on a possibly infinite dimensional weighted space with values also in a possibly infinite dimensional output space.
We prove a global universal approximation result on weighted spaces for continuous functions going beyond the usual approximation on compact sets.
We emphasize that the reproducing kernel Hilbert spaces of the signature kernels are Cameron-Martin spaces of certain Gaussian processes.
arXiv Detail & Related papers (2023-06-05T23:06:32Z) - Approximation and interpolation of deep neural networks [0.0]
In the overparametrized regime, deep neural networks provide universal approximation and can interpolate any data set.
In the last section, we provide a practical probabilistic method of finding such a point under general conditions on the activation function.
arXiv Detail & Related papers (2023-04-20T08:45:16Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Optimal Approximation Complexity of High-Dimensional Functions with Neural Networks [3.222802562733787]
We investigate properties of neural networks that use both ReLU and $x^2$ as activation functions.
We show how to leverage low local dimensionality in some contexts to overcome the curse of dimensionality, obtaining approximation rates that are optimal for unknown lower-dimensional subspaces.
arXiv Detail & Related papers (2023-01-30T17:29:19Z) - Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z) - Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z) - Interval Universal Approximation for Neural Networks [47.767793120249095]
We introduce the interval universal approximation (IUA) theorem.
The IUA theorem shows that neural networks can approximate any continuous function $f$, as has been known for decades.
We study the computational complexity of constructing neural networks that are amenable to precise interval analysis.
arXiv Detail & Related papers (2020-07-12T20:43:56Z) - Minimum Width for Universal Approximation [91.02689252671291]
We prove that the minimum width required for the universal approximation of $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
We also prove that the same conclusion does not hold for the uniform approximation with ReLU, but does hold with an additional threshold activation function.
arXiv Detail & Related papers (2020-06-16T01:24:21Z) - Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)