Deep neural network approximation of composite functions without the curse of dimensionality
- URL: http://arxiv.org/abs/2304.05790v1
- Date: Wed, 12 Apr 2023 12:08:59 GMT
- Title: Deep neural network approximation of composite functions without the curse of dimensionality
- Authors: Adrian Riekert
- Abstract summary: In this article we identify a class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs). The functions in our class can be expressed as a potentially unbounded number of compositions of special functions, which include products, maxima, and certain parallelized Lipschitz continuous functions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article we identify a general class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) with the rectified linear unit (ReLU) activation without the curse of dimensionality. In other words, the number of DNN parameters grows at most polynomially in the input dimension and the reciprocal of the approximation error. The functions in our class can be expressed as a potentially unbounded number of compositions of special functions, which include products, maxima, and certain parallelized Lipschitz continuous functions.
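As a concrete illustration of one ingredient of this function class (a sketch, not code from the paper), the maximum of two numbers is exactly representable by a single ReLU neuron via max(a, b) = b + ReLU(a - b), and composing this identity in a binary tree gives a maximum over d inputs whose depth and parameter count grow only linearly in d. The helper names below (relu_max, relu_max_tree) are illustrative, not the paper's:

    import numpy as np

    def relu(x):
        # rectified linear unit, the activation considered in the paper
        return np.maximum(x, 0.0)

    def relu_max(a, b):
        # exact ReLU identity: max(a, b) = b + relu(a - b)
        return b + relu(a - b)

    def relu_max_tree(x):
        # pairwise maxima arranged as a binary tree: depth O(log d),
        # O(d) parameters, i.e. no curse of dimensionality for this piece
        x = np.asarray(x, dtype=float)
        while x.size > 1:
            if x.size % 2 == 1:              # duplicate last entry on odd levels
                x = np.append(x, x[-1])      # max is unchanged by duplication
            x = relu_max(x[0::2], x[1::2])   # one ReLU layer per tree level
        return x[0]

    x = np.random.randn(16)
    assert np.isclose(relu_max_tree(x), np.max(x))

Products, by contrast, are only approximately representable by ReLU networks, which is where quantitative parameter bounds of the kind proved in the paper come into play.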
Related papers
- Universal approximation results for neural networks with non-polynomial activation function over non-compact domains [3.3379026542599934]
We derive universal approximation results for neural networks within function spaces over non-compact subsets of a Euclidean space.
We provide some dimension-independent rates for approximating a function with sufficiently regular and integrable Fourier transform by neural networks with non-polynomial activation function.
arXiv Detail & Related papers (2024-10-18T09:53:20Z)
- Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z)
- Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions [63.21838830509772]
We develop a framework for extending set functions onto low-dimensional continuous domains.
Our framework subsumes many well-known extensions as special cases.
We convert low-dimensional neural network bottlenecks into representations in high-dimensional spaces.
arXiv Detail & Related papers (2022-08-08T10:58:02Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristics of samples, the objective of those activation functions is to project samples from their original feature space to a linearly separable feature space.
This phenomenon motivates us to explore whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- Neural Network Approximation of Refinable Functions [8.323468006516018]
We show that refinable functions can be approximated by the outputs of deep ReLU networks of fixed width and increasing depth, with accuracy that improves exponentially in the depth.
Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
arXiv Detail & Related papers (2021-07-28T06:45:36Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- The Representation Power of Neural Networks: Breaking the Curse of Dimensionality [0.0]
We prove upper bounds on the number of parameters shallow and deep neural networks require to approximate Korobov functions.
We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions.
arXiv Detail & Related papers (2020-12-10T04:44:07Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- A Functional Perspective on Learning Symmetric Functions with Neural Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for such models.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- On Sharpness of Error Bounds for Multivariate Neural Network Approximation [0.0]
The paper deals with best non-linear approximation by sums of ridge functions, i.e., single hidden layer neural networks.
Error bounds are presented in terms of moduli of smoothness.
arXiv Detail & Related papers (2020-04-05T14:00:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.