Interval Universal Approximation for Neural Networks
- URL: http://arxiv.org/abs/2007.06093v5
- Date: Wed, 14 Jul 2021 05:51:30 GMT
- Title: Interval Universal Approximation for Neural Networks
- Authors: Zi Wang, Aws Albarghouthi, Gautam Prakriya, Somesh Jha
- Abstract summary: We introduce the interval universal approximation (IUA) theorem.
IUA shows that neural networks not only approximate any continuous function $f$, but that their interval bounds can approximate the set semantics of $f$ arbitrarily closely.
We study the computational complexity of constructing neural networks that are amenable to precise interval analysis.
- Score: 47.767793120249095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To verify safety and robustness of neural networks, researchers have
successfully applied abstract interpretation, primarily using the interval
abstract domain. In this paper, we study the theoretical power and limits of
the interval domain for neural-network verification.
First, we introduce the interval universal approximation (IUA) theorem. IUA
shows that neural networks not only can approximate any continuous function $f$
(universal approximation) as we have known for decades, but we can find a
neural network, using any well-behaved activation function, whose interval
bounds are an arbitrarily close approximation of the set semantics of $f$ (the
result of applying $f$ to a set of inputs). We call this notion of
approximation interval approximation. Our theorem generalizes the recent result
of Baader et al. (2020) from ReLUs to a rich class of activation functions that
we call squashable functions. Additionally, the IUA theorem implies that we can
always construct provably robust neural networks under $\ell_\infty$-norm using
almost any practical activation function.
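To make interval approximation concrete, the sketch below (illustrative only; the random two-layer sigmoid network is made up and this is not the paper's construction) compares the interval bounds of a small network on an input box $B$ with a sampled estimate of its true image $f(B)$. IUA says that for any continuous $f$ a network can be built, with any squashable activation, so that these two intervals are arbitrarily close.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def net(x, W1, b1, w2, b2):
    """A tiny one-hidden-layer network with a sigmoid (squashable) activation."""
    return w2 @ sigmoid(W1 @ x + b1) + b2

def net_interval(lo, hi, W1, b1, w2, b2):
    """Interval bounds: affine -> monotone sigmoid -> affine."""
    def affine(lo, hi, W, b):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
    lo, hi = affine(lo, hi, W1, b1)
    lo, hi = sigmoid(lo), sigmoid(hi)  # sigmoid is monotone, so endpoints suffice
    return affine(lo, hi, w2, b2)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
w2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])

box_lo, box_hi = net_interval(lo, hi, W1, b1, w2, b2)
samples = [net(rng.uniform(lo, hi), W1, b1, w2, b2) for _ in range(10_000)]
print("interval bounds :", box_lo, box_hi)
print("sampled image   :", np.min(samples), np.max(samples))
```

For a generic network the interval bounds strictly over-approximate the sampled image; the IUA construction shows the gap can be driven below any chosen $\epsilon$.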
Second, we study the computational complexity of constructing neural networks
that are amenable to precise interval analysis. This is a crucial question, as
our constructive proof of IUA is exponential in the size of the approximation
domain. We boil this question down to the problem of approximating the range of
a neural network with squashable activation functions. We show that the range
approximation problem (RA) is a $\Delta_2$-intermediate problem, which is
strictly harder than $\mathsf{NP}$-complete problems, assuming
$\mathsf{coNP}\not\subset \mathsf{NP}$. As a result, IUA is an inherently hard
problem: No matter what abstract domain or computational tools we consider to
achieve interval approximation, there is no efficient construction of such a
universal approximator.
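The naive way to tighten a range estimate is to split the input box and union the per-sub-box interval analyses. The sketch below (a hypothetical brute-force refinement, not an algorithm from the paper) makes the cost explicit: the number of sub-boxes grows exponentially with the input dimension, which is the flavor of blow-up the hardness result says cannot be avoided in general.

```python
import itertools
import numpy as np

def refined_range(lo, hi, interval_fn, splits_per_dim=4):
    """Union of interval analyses over a grid of sub-boxes.

    The grid has splits_per_dim ** dim sub-boxes, so the cost of this
    refinement grows exponentially with the input dimension.
    """
    dim = len(lo)
    edges = [np.linspace(lo[d], hi[d], splits_per_dim + 1) for d in range(dim)]
    out_lo, out_hi = np.inf, -np.inf
    for idx in itertools.product(range(splits_per_dim), repeat=dim):
        sub_lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        sub_hi = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        l, h = interval_fn(sub_lo, sub_hi)
        out_lo = min(out_lo, float(np.min(l)))
        out_hi = max(out_hi, float(np.max(h)))
    return out_lo, out_hi
```

Plugging in `net_interval` from the previous sketch does tighten the bounds, but only at a cost exponential in the dimension; the $\Delta_2$-intermediate hardness of RA says that, in the worst case, no abstract domain or algorithm escapes this kind of cost.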
Related papers
- Optimal Neural Network Approximation for High-Dimensional Continuous Functions [5.748690310135373]
We present a family of continuous functions that requires at least width $d$, and therefore at least $d$ intrinsic neurons, to achieve arbitrary accuracy in its approximation.
This shows that the requirement of $\mathcal{O}(d)$ intrinsic neurons is optimal in the sense that it grows linearly with the input dimension $d$.
arXiv Detail & Related papers (2024-09-04T01:18:55Z) - Optimal approximation using complex-valued neural networks [0.0]
Complex-valued neural networks (CVNNs) have recently shown promising empirical success.
We analyze the expressivity of CVNNs by studying their approximation properties.
arXiv Detail & Related papers (2023-03-29T15:56:43Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Neural Network Approximation of Continuous Functions in High Dimensions
with Applications to Inverse Problems [6.84380898679299]
Current theory predicts that networks should scale exponentially in the dimension of the problem.
We provide a general method for bounding the complexity required for a neural network to approximate a Hölder (or uniformly) continuous function.
arXiv Detail & Related papers (2022-08-28T22:44:07Z) - Deep neural network approximation of analytic functions [91.3755431537592]
We establish an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z) - Size and Depth Separation in Approximating Natural Functions with Neural
Networks [52.73592689730044]
We show the benefits of size and depth for approximation of natural functions with ReLU networks.
We show a complexity-theoretic barrier to proving such results beyond size $O(d)$.
We also show an explicit natural function that can be approximated with networks of size $O(d)$.
arXiv Detail & Related papers (2021-01-30T21:30:11Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Random Vector Functional Link Networks for Function Approximation on Manifolds [8.535815777849786]
Single-layer neural networks with random input-to-hidden-layer weights and biases have seen success in practice.
We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space.
arXiv Detail & Related papers (2020-07-30T23:50:44Z) - Nonclosedness of Sets of Neural Networks in Sobolev Spaces [0.0]
We show that realized neural networks are not closed in order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1,\infty]$.
For a real analytic activation function, we show that sets of realized neural networks are not closed in $W^{k,p}$ for any $k \in \mathbb{N}$.
arXiv Detail & Related papers (2020-07-23T00:57:25Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)