On the universal approximation property of radial basis function neural networks
- URL: http://arxiv.org/abs/2304.02220v2
- Date: Sat, 8 Apr 2023 13:54:20 GMT
- Title: On the universal approximation property of radial basis function neural networks
- Authors: Aysu Ismayilova and Muhammad Ismayilov
- Abstract summary: We consider a new class of RBF (Radial Basis Function) neural networks, in which smoothing factors are replaced with shifts.
We prove under certain conditions that these networks are capable of approximating any continuous multivariate function on any compact subset of the $d$-dimensional Euclidean space.
For RBF networks with finitely many fixed centroids we describe conditions guaranteeing approximation with arbitrary precision.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider a new class of RBF (Radial Basis Function) neural
networks, in which smoothing factors are replaced with shifts. We prove under
certain conditions on the activation function that these networks are capable
of approximating any continuous multivariate function on any compact subset of
the $d$-dimensional Euclidean space. For RBF networks with finitely many fixed
centroids we describe conditions guaranteeing approximation with arbitrary
precision.
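To make the object concrete, here is a minimal numerical sketch (our illustration, not the paper's construction or notation): units of the form $c_i\,\sigma(\|x - a_i\| + b_i)$, where the usual smoothing factor in $\sigma(\|x - a_i\|/\varepsilon_i)$ is replaced by an additive shift $b_i$. With finitely many fixed centroids and shifts, fitting the outer weights $c_i$ reduces to linear least squares; the Gaussian profile below is just one plausible activation.

```python
import numpy as np

# Minimal sketch of an RBF-type network in which the smoothing factor
# ||x - a_i|| / eps_i is replaced by an additive shift ||x - a_i|| + b_i,
# i.e. f(x) = sum_i c_i * sigma(||x - a_i|| + b_i).
# Names and the Gaussian profile are our choices, not the paper's.

def sigma(t):
    # One admissible activation; the paper's conditions are more general.
    return np.exp(-t**2)

rng = np.random.default_rng(0)
d, n_centroids, n_samples = 2, 50, 500

centroids = rng.uniform(-1, 1, size=(n_centroids, d))  # fixed centroids a_i
shifts = rng.uniform(-1, 1, size=n_centroids)          # shifts b_i

X = rng.uniform(-1, 1, size=(n_samples, d))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])  # target function

# Design matrix Phi[j, i] = sigma(||x_j - a_i|| + b_i).  With centroids
# and shifts fixed, fitting the outer weights c_i is linear least squares.
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
Phi = sigma(dists + shifts)
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print("max training residual:", np.abs(Phi @ c - y).max())
```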
Related papers
- Approximation Error and Complexity Bounds for ReLU Networks on Low-Regular Function Spaces [0.0]
We consider the approximation of a large class of bounded functions, with minimal regularity assumptions, by ReLU neural networks.
We show that the approximation error can be bounded from above by a quantity proportional to the uniform norm of the target function.
arXiv Detail & Related papers (2024-05-10T14:31:58Z)
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions [93.02515761070201]
We present a novel type of neural fields that uses general radial bases for signal representation.
Our method builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals.
When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
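As a rough illustration of the adaptive-radial-basis idea (our sketch with hypothetical names; NeuRBF's actual kernels and training pipeline are more involved), each kernel can carry a learnable position $a_i$ and shape matrix $M_i$, giving responses $\exp(-\|M_i(x - a_i)\|^2)$ that stretch and rotate to fit the local signal:

```python
import numpy as np

# Hypothetical sketch of an adaptive radial basis: each kernel has a
# position a_i and a shape matrix M_i, so the basis can stretch and
# rotate (NeuRBF's actual parameterization is more involved).

def adaptive_rbf(x, a, M):
    # phi_i(x) = exp(-||M_i (x - a_i)||^2), evaluated for all kernels.
    z = np.einsum('kij,kj->ki', M, x - a)   # shape-transformed offsets
    return np.exp(-np.sum(z**2, axis=1))    # one response per kernel

rng = np.random.default_rng(0)
k, d = 8, 2
a = rng.uniform(-1, 1, size=(k, d))         # kernel positions
M = rng.normal(size=(k, d, d))              # anisotropic shapes
x = np.array([0.3, -0.5])

print(adaptive_rbf(x, a, M))                # 8 kernel responses
```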
arXiv Detail & Related papers (2023-09-27T06:32:05Z)
- Piecewise Linear Functions Representable with Infinite Width Shallow ReLU Neural Networks [0.0]
We prove a conjecture of Ongie et al. that every continuous piecewise linear function expressible by an infinite width shallow ReLU neural network is also expressible as a finite width shallow ReLU neural network.
arXiv Detail & Related papers (2023-07-25T15:38:18Z)
- Approximation and interpolation of deep neural networks [0.0]
In the overparametrized regime, deep neural networks provide universal approximation and can interpolate any data set.
In the last section, we provide a practical probabilistic method of finding such a point under general conditions on the activation function.
arXiv Detail & Related papers (2023-04-20T08:45:16Z)
- Optimal Approximation Complexity of High-Dimensional Functions with Neural Networks [3.222802562733787]
We investigate properties of neural networks that use both ReLU and $x^2$ as activation functions.
We show how to leverage low local dimensionality in some contexts to overcome the curse of dimensionality, obtaining approximation rates that are optimal for unknown lower-dimensional subspaces.
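As background on why a squaring activation is powerful alongside ReLU (a standard observation, not necessarily this paper's construction), one hidden layer of $x^2$ units computes exact products via the polarization identity $xy = ((x+y)^2 - (x-y)^2)/4$, which makes polynomials cheap to assemble. A minimal check:

```python
import numpy as np

# A single hidden layer of squaring units computes an exact product:
# the hidden units square the linear combinations x + y and x - y, and
# a fixed linear readout recovers x * y via the polarization identity.

def product_via_squares(x, y):
    h1 = (x + y) ** 2          # first x^2 unit
    h2 = (x - y) ** 2          # second x^2 unit
    return (h1 - h2) / 4.0     # linear readout: exactly x * y

x, y = 3.7, -1.2
assert np.isclose(product_via_squares(x, y), x * y)
print(product_via_squares(x, y))  # -4.44
```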
arXiv Detail & Related papers (2023-01-30T17:29:19Z)
- Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint [48.25573695787407]
We prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness.
Our theory partially justifies the benefits of using deep and wide networks in practice.
arXiv Detail & Related papers (2022-06-09T15:35:22Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild overparameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- Approximations with deep neural networks in Sobolev time-space [5.863264019032882]
The solution of an evolution equation generally lies in certain Bochner-Sobolev spaces.
Deep neural networks can approximate Sobolev-regular functions with respect to Bochner-Sobolev spaces.
arXiv Detail & Related papers (2020-12-23T22:21:05Z)
- Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
arXiv Detail & Related papers (2020-06-22T14:31:37Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and show competitive performance compared to state-of-the-art solvers.
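Schematically (a simplified sketch with our own names, not the paper's exact architecture), one such layer updates node features with a pointwise linear map plus a discretized kernel integral over the graph nodes:

```python
import numpy as np

# Schematic kernel-integral layer (our simplification):
#   v'(x_i) = relu(W v(x_i) + (1/n) * sum_j kappa(x_i, x_j) v(x_j)),
# where the integral operator is discretized as a mean over graph nodes.
# In the paper the kernel kappa is itself parameterized by a neural net;
# here we use a fixed exponential kernel for illustration.

def relu(t):
    return np.maximum(t, 0.0)

rng = np.random.default_rng(1)
n, width = 64, 16                      # graph nodes, channel width

pts = rng.uniform(0, 1, size=(n, 1))   # node coordinates in the domain
v = rng.normal(size=(n, width))        # node features v(x_i)
W = rng.normal(size=(width, width)) / np.sqrt(width)

K = np.exp(-np.abs(pts - pts.T))       # kernel matrix kappa(x_i, x_j)

v_next = relu(v @ W + (K @ v) / n)     # one layer of the operator network
print(v_next.shape)                    # (64, 16)
```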
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.