Neural Network Approximation of Refinable Functions
- URL: http://arxiv.org/abs/2107.13191v1
- Date: Wed, 28 Jul 2021 06:45:36 GMT
- Title: Neural Network Approximation of Refinable Functions
- Authors: Ingrid Daubechies, Ronald DeVore, Nadav Dym, Shira Faigenbaum-Golovin,
Shahar Z. Kovalsky, Kung-Ching Lin, Josiah Park, Guergana Petrova, Barak
Sober
- Abstract summary: We show that refinable functions can be approximated by the outputs of deep ReLU networks of fixed width and increasing depth, with accuracy exponential in the number of network parameters.
Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
- Score: 8.323468006516018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the desire to quantify the success of neural networks in deep learning and
other applications, there is great interest in understanding which functions
are efficiently approximated by the outputs of neural networks. By now, there
exists a variety of results which show that a wide range of functions can be
approximated with sometimes surprising accuracy by these outputs. For example,
it is known that the set of functions that can be approximated with exponential
accuracy (in terms of the number of parameters used) includes, on one hand,
very smooth functions such as polynomials and analytic functions (see e.g.
\cite{E,S,Y}) and, on the other hand, very rough functions such as the
Weierstrass function (see e.g. \cite{EPGB,DDFHP}), which is nowhere
differentiable. In this paper, we add to the latter class of rough functions by
showing that it also includes refinable functions. Namely, we show that
refinable functions can be approximated by the outputs of deep ReLU networks
with fixed width and increasing depth, with accuracy exponential in the number
of their parameters. Our results apply to functions used in the standard
construction of wavelets as well as to functions constructed via subdivision
algorithms in Computer Aided Geometric Design.
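As a brief illustration of the objects in question, the sketch below shows the refinement equation $\varphi(x) = \sum_k c_k \varphi(2x - k)$ that defines a refinable function, together with the cascade iteration commonly used to evaluate such functions on dyadic grids. This is background only, not the paper's ReLU-network construction; the hat-function mask $c = (1/2, 1, 1/2)$, the function names, and the grid handling are illustrative choices made here because the limit function is known in closed form and the iteration can be checked exactly.

```python
# Illustrative sketch (not the paper's construction): the refinement equation
# phi(x) = sum_k c_k * phi(2x - k) and one pass of the cascade algorithm used
# in the wavelet/subdivision literature to compute phi on dyadic grids.
# The mask is the piecewise-linear "hat" (order-2 B-spline), c = [1/2, 1, 1/2].

import numpy as np

MASK = np.array([0.5, 1.0, 0.5])   # refinement mask c_k, with sum(c_k) = 2
SUPPORT = len(MASK) - 1            # phi is supported on [0, SUPPORT]

def cascade_step(values, level):
    """One cascade iteration: given phi_n sampled at j * 2**-level on
    [0, SUPPORT], return phi_{n+1} sampled at j * 2**-(level + 1)."""
    n_coarse = SUPPORT * 2**level          # last index of the coarse grid
    n_fine = SUPPORT * 2**(level + 1)      # last index of the refined grid
    fine = np.zeros(n_fine + 1)
    for j in range(n_fine + 1):
        # phi_{n+1}(j 2^{-(level+1)}) = sum_k c_k * phi_n(j 2^{-level} - k)
        for k, c in enumerate(MASK):
            idx = j - k * 2**level
            if 0 <= idx <= n_coarse:
                fine[j] += c * values[idx]
    return fine

def hat(x):
    """Closed-form hat function on [0, 2]: the refinable limit of MASK."""
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

# Start from the exact hat sampled on the integer grid {0, 1, 2} and iterate.
values, level = hat(np.arange(SUPPORT + 1.0)), 0
for _ in range(6):
    values, level = cascade_step(values, level), level + 1

grid = np.arange(len(values)) / 2**level
print("max deviation from closed form:", np.max(np.abs(values - hat(grid))))
# ~0.0: the hat function is a fixed point of its own refinement equation.
```

Swapping in a different mask (e.g. a Daubechies scaling mask) yields the rougher refinable functions the abstract refers to; the cascade converges to them even though no closed form exists for a pointwise check.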
Related papers
- Spherical Analysis of Learning Nonlinear Functionals [10.785977740158193]
In this paper, we consider functionals defined on sets of functions on spheres.
The approximation ability of deep ReLU neural networks is investigated using an encoder-decoder framework.
arXiv Detail & Related papers (2024-10-01T20:10:00Z) - Deep Neural Networks are Adaptive to Function Regularity and Data Distribution in Approximation and Estimation [8.284464143581546]
We study how deep neural networks can adapt to different regularity in functions across different locations and scales.
Our results show that deep neural networks are adaptive to different regularity of functions and nonuniform data distributions.
arXiv Detail & Related papers (2024-06-08T02:01:50Z) - Approximation of RKHS Functionals by Neural Networks [30.42446856477086]
We study the approximation of functionals on reproducing kernel Hilbert spaces (RKHSs) using neural networks.
We derive explicit error bounds for those induced by inverse multiquadric, Gaussian, and Sobolev kernels.
We apply our findings to functional regression, proving that neural networks can accurately approximate the regression maps.
arXiv Detail & Related papers (2024-03-18T18:58:23Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Benefits of Overparameterized Convolutional Residual Networks: Function
Approximation under Smoothness Constraint [48.25573695787407]
We prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness.
Our theory partially justifies the benefits of using deep and wide networks in practice.
arXiv Detail & Related papers (2022-06-09T15:35:22Z) - Size and Depth Separation in Approximating Natural Functions with Neural
Networks [52.73592689730044]
We show the benefits of size and depth for approximation of natural functions with ReLU networks.
We show a complexity-theoretic barrier to proving such results beyond size $O(d)$.
We also show an explicit natural function that can be approximated with networks of size $O(d)$.
arXiv Detail & Related papers (2021-01-30T21:30:11Z) - A Functional Perspective on Learning Symmetric Functions with Neural
Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - PDE constraints on smooth hierarchical functions computed by neural
networks [0.0]
An important problem in the theory of deep neural networks is expressivity.
We study real infinitely differentiable (smooth) hierarchical functions implemented by feedforward neural networks.
We conjecture that such PDE constraints, once accompanied by appropriate non-singularity conditions, guarantee that the smooth function under consideration can be represented by the network.
arXiv Detail & Related papers (2020-05-18T16:34:11Z) - Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by the recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.