Space-Time Approximation with Shallow Neural Networks in Fourier
Lebesgue spaces
- URL: http://arxiv.org/abs/2312.08461v1
- Date: Wed, 13 Dec 2023 19:02:27 GMT
- Title: Space-Time Approximation with Shallow Neural Networks in Fourier
Lebesgue spaces
- Authors: Ahmed Abdeljawad, Thomas Dittrich
- Abstract summary: We study the inclusion of anisotropic weighted Fourier-Lebesgue spaces in the Bochner-Sobolev spaces.
We establish a bound on the approximation rate for functions from the anisotropic weighted Fourier-Lebesgue spaces and approximation via SNNs in the Bochner-Sobolev norm.
- Score: 1.74048653626208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Approximation capabilities of shallow neural networks (SNNs) form an integral
part in understanding the properties of deep neural networks (DNNs). In the
study of these approximation capabilities some very popular classes of target
functions are the so-called spectral Barron spaces. These spaces are of special
interest when it comes to the approximation of partial differential equation
(PDE) solutions. It has been shown that the solution of certain static PDEs
will lie in some spectral Barron space. In order to alleviate the limitation to
static PDEs and include a time-domain that might have a different regularity
than the space domain, we extend the notion of spectral Barron spaces to
anisotropic weighted Fourier-Lebesgue spaces. In doing so, we consider target
functions that have two blocks of variables, among which each block is allowed
to have different decay and integrability properties. For these target
functions we first study the inclusion of anisotropic weighted Fourier-Lebesgue
spaces in the Bochner-Sobolev spaces. With that we can now also measure the
approximation error in terms of an anisotropic Sobolev norm, namely the
Bochner-Sobolev norm. We use this observation in a second step where we
establish a bound on the approximation rate for functions from the anisotropic
weighted Fourier-Lebesgue spaces and approximation via SNNs in the
Bochner-Sobolev norm.
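For orientation, the spectral Barron norm and its weighted Fourier-Lebesgue generalization can be written, in one common convention (the notation below is illustrative and need not match the paper's exact definitions):

```latex
% Spectral Barron norm (one common convention): weighted decay of the
% Fourier transform, measured in L^1
\|f\|_{\mathcal{B}^{s}}
  = \int_{\mathbb{R}^d} \bigl(1 + |\xi|\bigr)^{s}\,
    \bigl|\widehat{f}(\xi)\bigr| \,\mathrm{d}\xi .

% Weighted Fourier-Lebesgue norm: L^1 is replaced by L^p and the
% polynomial weight by a general weight w. Anisotropy enters by letting
% w treat the time-frequency block \tau and the space-frequency block
% \xi differently, e.g.
\|f\|_{\mathcal{F}L^{p}_{w}}
  = \Bigl( \int \bigl| w(\tau,\xi)\,
    \widehat{f}(\tau,\xi) \bigr|^{p} \,\mathrm{d}\tau\,\mathrm{d}\xi \Bigr)^{1/p} .
```

Taking $p=1$ and $w(\xi) = (1+|\xi|)^{s}$ recovers the spectral Barron norm as a special case, which is why these spaces are a natural generalization.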
Related papers
- A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis [18.454085925930073]
We show that certain functions that lie in the Gaussian RKHS have infinite norm in the neural network Banach space.
This provides a nontrivial gap between kernel methods and neural networks.
arXiv Detail & Related papers (2025-02-22T19:33:19Z) - Approximation Rates in Fréchet Metrics: Barron Spaces, Paley-Wiener Spaces, and Fourier Multipliers [1.4732811715354452]
We study some general approximation capabilities for linear differential operators by approximating the corresponding symbol in the Fourier domain.
In that sense, we measure the approximation error in terms of a Fréchet metric.
We then focus on a natural extension of our main theorem, in which we manage to reduce the assumptions on the sequence of semi-norms.
arXiv Detail & Related papers (2024-12-27T20:16:04Z) - Weighted Sobolev Approximation Rates for Neural Networks on Unbounded Domains [1.4732811715354452]
We consider the approximation capabilities of shallow neural networks in weighted Sobolev spaces for functions in the spectral Barron space.
We first present embedding results for the more general weighted Fourier-Lebesgue spaces in the weighted Sobolev spaces and then we establish approximation rates for shallow neural networks that come without curse of dimensionality.
arXiv Detail & Related papers (2024-11-06T18:36:21Z) - Learning the boundary-to-domain mapping using Lifting Product Fourier Neural Operators for partial differential equations [5.5927988408828755]
We present a novel FNO-based architecture, named Lifting Product FNO (or LP-FNO) which can map arbitrary boundary functions to a solution in the entire domain.
We demonstrate the efficacy and resolution independence of the proposed LP-FNO for the 2D Poisson equation.
arXiv Detail & Related papers (2024-06-24T15:45:37Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Correspondence between open bosonic systems and stochastic differential
equations [77.34726150561087]
We show that there can also be an exact correspondence at finite $n$ when the bosonic system is generalized to include interactions with the environment.
A particular system with the form of a discrete nonlinear Schrödinger equation is analyzed in more detail.
arXiv Detail & Related papers (2023-02-03T19:17:37Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-10-15T13:53:21Z) - Unified Fourier-based Kernel and Nonlinearity Design for Equivariant
Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z) - Exponential Convergence of Deep Operator Networks for Elliptic Partial
Differential Equations [0.0]
We construct deep operator networks (ONets) between infinite-dimensional spaces that emulate with an exponential rate of convergence the coefficient-to-solution map of elliptic second-order PDEs.
In particular, we consider problems set in $d$-dimensional periodic domains, $d=1, 2, \dots$, and with analytic right-hand sides and coefficients.
We prove that the neural networks in the ONet have size $\mathcal{O}(\left|\log(\varepsilon)\right|^{\kappa})$ for some $\kappa$
arXiv Detail & Related papers (2021-12-15T13:56:28Z) - Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z) - Approximations with deep neural networks in Sobolev time-space [5.863264019032882]
Solutions of evolution equations generally lie in certain Bochner-Sobolev spaces.
Deep neural networks can approximate Sobolev-regular functions with respect to Bochner-Sobolev spaces.
arXiv Detail & Related papers (2020-12-23T22:21:05Z) - Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
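As a concrete illustration of the shallow networks whose approximation rates the main paper studies, here is a minimal sketch of a width-$n$ SNN approximant $f_n(x) = \sum_{k=1}^{n} a_k\,\sigma(w_k^\top x + b_k)$. All parameter choices are hypothetical, and the outer weights are fit by plain least squares over random inner parameters (a common "random features" simplification, not the construction from the paper):

```python
import numpy as np

def snn(x, weights, biases, amplitudes):
    """Shallow (one-hidden-layer) ReLU network: sum_k a_k * relu(w_k . x + b_k).

    x: (m, d) input points; weights: (n, d); biases: (n,); amplitudes: (n,).
    Returns the (m,) vector of network outputs.
    """
    hidden = np.maximum(weights @ x.T + biases[:, None], 0.0)  # (n, m) ReLU features
    return amplitudes @ hidden                                  # (m,) outputs

rng = np.random.default_rng(0)
n, d, m = 64, 1, 200                      # width, input dimension, sample count
x = np.linspace(-1.0, 1.0, m).reshape(m, d)
target = np.sin(np.pi * x[:, 0])          # smooth target function to approximate

# Random inner parameters; only the outer amplitudes are optimized.
W = rng.normal(size=(n, d))
b = rng.uniform(-1.0, 1.0, size=n)
features = np.maximum(W @ x.T + b[:, None], 0.0)   # (n, m) hidden activations
a, *_ = np.linalg.lstsq(features.T, target, rcond=None)

approx = snn(x, W, b, a)
err = np.max(np.abs(approx - target))     # sup-norm error on the sample grid
```

With enough neurons the piecewise-linear SNN fits the smooth target closely on the grid; the paper's contribution is quantifying how such errors decay, in the Bochner-Sobolev norm, for targets from anisotropic weighted Fourier-Lebesgue spaces.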
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.