The universal approximation theorem for complex-valued neural networks
- URL: http://arxiv.org/abs/2012.03351v1
- Date: Sun, 6 Dec 2020 18:51:10 GMT
- Title: The universal approximation theorem for complex-valued neural networks
- Authors: Felix Voigtlaender
- Abstract summary: We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks.
We consider feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$ in which each neuron performs the operation $\mathbb{C}^N \to \mathbb{C}, z \mapsto \sigma(b + w^T z)$ with weights $w \in \mathbb{C}^N$ and a bias $b \in \mathbb{C}$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We generalize the classical universal approximation theorem for neural
networks to the case of complex-valued neural networks. Precisely, we consider
feedforward networks with a complex activation function $\sigma : \mathbb{C}
\to \mathbb{C}$ in which each neuron performs the operation $\mathbb{C}^N \to
\mathbb{C}, z \mapsto \sigma(b + w^T z)$ with weights $w \in \mathbb{C}^N$ and
a bias $b \in \mathbb{C}$, and with $\sigma$ applied componentwise. We
completely characterize those activation functions $\sigma$ for which the
associated complex networks have the universal approximation property, meaning
that they can uniformly approximate any continuous function on any compact
subset of $\mathbb{C}^d$ arbitrarily well.
Unlike the classical case of real networks, the set of "good activation
functions" which give rise to networks with the universal approximation
property differs significantly depending on whether one considers deep networks
or shallow networks: For deep networks with at least two hidden layers, the
universal approximation property holds as long as $\sigma$ is neither a
polynomial, a holomorphic function, nor an antiholomorphic function. Shallow
networks, on the other hand, are universal if and only if the real part or the
imaginary part of $\sigma$ is not a polyharmonic function.
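To make the neuron model concrete, here is a minimal NumPy sketch (an illustration under our own assumptions, not code from the paper) of a complex-valued layer computing $z \mapsto \sigma(b + w^T z)$ with $\sigma$ applied componentwise. The activation `crelu` below is one hypothetical choice that is neither a polynomial, nor holomorphic, nor antiholomorphic, so by the theorem above, deep networks built from it have the universal approximation property.

```python
import numpy as np

def crelu(z):
    """ReLU applied separately to the real and imaginary parts.
    Neither a polynomial, nor holomorphic, nor antiholomorphic."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def complex_layer(z, W, b, sigma=crelu):
    """One layer of complex-valued neurons: sigma(b + W z), componentwise.

    z : (N,) complex input vector
    W : (M, N) complex weight matrix (row m holds the weights w of neuron m)
    b : (M,) complex bias vector
    """
    return sigma(b + W @ z)

# Tiny usage example: a network C^4 -> C with one hidden layer of 3 neurons.
rng = np.random.default_rng(0)
z  = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W1 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
b1 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
W2 = rng.standard_normal((1, 3)) + 1j * rng.standard_normal((1, 3))
b2 = rng.standard_normal(1) + 1j * rng.standard_normal(1)

hidden = complex_layer(z, W1, b1)   # hidden layer with componentwise activation
output = b2 + W2 @ hidden           # affine output layer (no activation)
print(output)
```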
Related papers
- New advances in universal approximation with neural networks of minimal width [4.424170214926035]
We show that autoencoders with leaky ReLU activations are universal approximators of $L^p$ functions.
We broaden our results to show that smooth invertible neural networks can approximate $L^p(\mathbb{R}^d, \mathbb{R}^d)$ on compacta.
arXiv Detail & Related papers (2024-11-13T16:17:16Z) - Structure of universal formulas [13.794391803767617]
We introduce a hierarchy of classes connecting the global approximability property to the weaker property of infinite VC dimension.
We show that fixed-size neural networks with not more than one layer of neurons having activations cannot approximate functions on arbitrary finite sets.
We give examples of functional families, including two-hidden-layer neural networks, that approximate functions on arbitrary finite sets, but fail to do that on the whole domain of definition.
arXiv Detail & Related papers (2023-11-07T11:50:25Z) - Universal approximation with complex-valued deep narrow neural networks [0.0]
We study the universality of complex-valued neural networks with bounded widths and arbitrary depths.
We show that deep narrow complex-valued networks are universal if and only if their activation function is neither holomorphic, nor antiholomorphic, nor $\mathbb{R}$-affine.
arXiv Detail & Related papers (2023-05-26T13:22:14Z) - Neural Network Approximation of Continuous Functions in High Dimensions
with Applications to Inverse Problems [6.84380898679299]
Current theory predicts that networks should scale exponentially in the dimension of the problem.
We provide a general method for bounding the complexity required for a neural network to approximate a Hölder (or uniformly) continuous function.
arXiv Detail & Related papers (2022-08-28T22:44:07Z) - Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z) - Quantitative approximation results for complex-valued neural networks [0.0]
We show that complex-valued neural networks with the modReLU activation function $\sigma(z) = \mathrm{ReLU}(|z| - 1) \cdot \frac{z}{|z|}$ can uniformly approximate complex-valued functions of regularity $C^n$ on compact subsets of $\mathbb{C}^d$, giving explicit bounds on the approximation rate (a minimal sketch of modReLU appears after this list).
arXiv Detail & Related papers (2021-02-25T18:57:58Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - A deep network construction that adapts to intrinsic dimensionality
beyond the domain [79.23797234241471]
We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation.
We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and a distance to a collection of low-dimensional sets.
arXiv Detail & Related papers (2020-08-06T09:50:29Z) - Interval Universal Approximation for Neural Networks [47.767793120249095]
We introduce the interval universal approximation (IUA) theorem.
The IUA theorem shows that neural networks can approximate any continuous function $f$, as has been known for decades.
We study the computational complexity of constructing neural networks that are amenable to precise interval analysis.
arXiv Detail & Related papers (2020-07-12T20:43:56Z) - Coupling-based Invertible Neural Networks Are Universal Diffeomorphism
Approximators [72.62940905965267]
Invertible neural networks based on coupling flows (CF-INNs) have various machine learning applications such as image synthesis and representation learning.
Are CF-INNs universal approximators for invertible functions?
We prove a general theorem to show the equivalence of the universality for certain diffeomorphism classes.
arXiv Detail & Related papers (2020-06-20T02:07:37Z) - Minimum Width for Universal Approximation [91.02689252671291]
We prove that the minimum width required for the universal approximation of $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
We also prove that the same conclusion does not hold for the uniform approximation with ReLU, but does hold with an additional threshold activation function.
arXiv Detail & Related papers (2020-06-16T01:24:21Z)
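The modReLU entry above quotes only the formula; the following is a minimal NumPy sketch (an illustration, not code from any of the listed papers) of the modReLU activation $\sigma_b(z) = \mathrm{ReLU}(|z| + b) \cdot z/|z|$, which thresholds the modulus by a bias $b$ while preserving the phase; $b = -1$ recovers the fixed choice quoted in that entry.

```python
import numpy as np

def modrelu(z, b=-1.0, eps=1e-12):
    """modReLU applied componentwise to a complex array z:
    ReLU(|z| + b) * z / |z|, with the phase of z left unchanged.
    The default b = -1 matches the fixed choice quoted above."""
    mag = np.abs(z)
    phase = z / np.maximum(mag, eps)   # unit-modulus direction z/|z|, safe at z = 0
    return np.maximum(mag + b, 0.0) * phase

# Usage: points with |z| <= 1 are mapped to 0; larger points keep their phase.
z = np.array([0.5 + 0.5j, 2.0 - 1.0j, -3.0j])
print(modrelu(z))   # approximately [0+0j, 1.106-0.553j, 0-2j]
```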