Weighted Sobolev Approximation Rates for Neural Networks on Unbounded Domains
- URL: http://arxiv.org/abs/2411.04108v1
- Date: Wed, 06 Nov 2024 18:36:21 GMT
- Title: Weighted Sobolev Approximation Rates for Neural Networks on Unbounded Domains
- Authors: Ahmed Abdeljawad, Thomas Dittrich
- Abstract summary: We consider the approximation capabilities of shallow neural networks in weighted Sobolev spaces for functions in the spectral Barron space.
We first present embedding results for the more general weighted Fourier-Lebesgue spaces into the weighted Sobolev spaces and then we establish approximation rates for shallow neural networks that come without a curse of dimensionality.
- Score: 1.4732811715354452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider the approximation capabilities of shallow neural networks in weighted Sobolev spaces for functions in the spectral Barron space. The existing literature already covers several cases in which the spectral Barron space can be approximated well, i.e., without a curse of dimensionality, by shallow networks with several different classes of activation function. The main limitation of the existing results lies in the error measures considered, which restrict the results to Sobolev spaces over a bounded domain. Here we treat two cases that extend the existing results: the case of a bounded domain with Muckenhoupt weights, and the case where the domain is allowed to be unbounded and the weights are required to decay. We first present embedding results for the more general weighted Fourier-Lebesgue spaces into the weighted Sobolev spaces, and then we establish asymptotic approximation rates for shallow neural networks that come without a curse of dimensionality.
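For orientation, a minimal sketch (not taken from the paper) of the objects involved, using one common convention for the spectral Barron space; the weighted norms, weight classes, and exponents treated in this work may differ. The spectral Barron space of order $s$ collects functions with
\[ \|f\|_{\mathcal{B}^s} := \int_{\mathbb{R}^d} (1+|\xi|)^s \, |\hat{f}(\xi)| \, \mathrm{d}\xi < \infty, \]
and Barron-type results for a shallow network $f_n$ with $n$ neurons typically take the schematic form
\[ \inf_{f_n} \|f - f_n\|_{H^1(\Omega)} \;\lesssim\; \frac{\|f\|_{\mathcal{B}^s}}{\sqrt{n}}, \]
with the implied constant not growing with the input dimension $d$. The present work establishes rates of this flavour with the norm on the left replaced by a weighted Sobolev norm (Muckenhoupt weights on bounded domains, decaying weights on unbounded domains).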
Related papers
- A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis [18.454085925930073]
We show that certain functions that lie in the Gaussian RKHS have infinite norm in the neural network Banach space.
This provides a nontrivial gap between kernel methods and neural networks.
arXiv Detail & Related papers (2025-02-22T19:33:19Z) - Dimension-independent learning rates for high-dimensional classification problems [53.622581586464634]
We show that every $RBV^2$ function can be approximated by a neural network with bounded weights.
We then prove the existence of a neural network with bounded weights approximating a classification function.
arXiv Detail & Related papers (2024-09-26T16:02:13Z) - Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured? [4.878983382452911]
We show that while universal approximation is theoretically feasible, in practical numerical scenarios, Deep Neural Networks (DNNs) can only be approximated by a finite-dimensional vector space.
We introduce the concepts of the $\epsilon$ outer measure and the Numerical Span Dimension (NSdim) to quantify the approximation capacity limit of a family of networks.
arXiv Detail & Related papers (2024-09-25T07:43:48Z) - Space-Time Approximation with Shallow Neural Networks in Fourier Lebesgue spaces [1.74048653626208]
We study the inclusion of anisotropic weighted Fourier-Lebesgue spaces in the Bochner-Sobolev spaces.
We establish a bound on the rate at which functions from the anisotropic weighted Fourier-Lebesgue spaces can be approximated by shallow neural networks (SNNs) in the Bochner-Sobolev norm.
arXiv Detail & Related papers (2023-12-13T19:02:27Z) - Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-10-15T13:53:21Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z) - Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks [19.216784367141972]
We study the problem of estimating an unknown function from noisy data using shallow (single-hidden layer) ReLU neural networks.
We quantify the performance of these neural network estimators when the data-generating function belongs to the space of functions of second-order bounded variation in the Radon domain.
arXiv Detail & Related papers (2021-09-18T05:56:06Z) - Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
arXiv Detail & Related papers (2021-06-22T19:51:59Z) - Two-layer neural networks with values in a Banach space [1.90365714903665]
We study two-layer neural networks whose domain and range are Banach spaces with separable preduals.
As the nonlinearity we choose the lattice operation of taking the positive part; in the case of $\mathbb{R}^d$-valued neural networks this corresponds to the ReLU activation function.
arXiv Detail & Related papers (2021-05-05T14:54:24Z) - The role of boundary conditions in quantum computations of scattering
observables [58.720142291102135]
Quantum computing may offer the opportunity to simulate strongly-interacting field theories, such as quantum chromodynamics, with physical time evolution.
As with present-day calculations, quantum computation strategies still require the restriction to a finite system size.
We quantify the volume effects for various $1+1$D Minkowski-signature quantities and show that these can be a significant source of systematic uncertainty.
arXiv Detail & Related papers (2020-07-01T17:43:11Z) - Exact posterior distributions of wide Bayesian neural networks [51.20413322972014]
We show that the exact Bayesian neural network (BNN) posterior converges (weakly) to the one induced by the Gaussian process (GP) limit of the prior.
For empirical validation, we show how to generate exact samples from a finite BNN on a small dataset via rejection sampling.
arXiv Detail & Related papers (2020-06-18T13:57:04Z)