On Sharpness of Error Bounds for Multivariate Neural Network
Approximation
- URL: http://arxiv.org/abs/2004.02203v3
- Date: Mon, 23 Nov 2020 15:28:36 GMT
- Title: On Sharpness of Error Bounds for Multivariate Neural Network
Approximation
- Authors: Steffen Goebbels
- Abstract summary: The paper deals with best non-linear approximation by sums of ridge functions.
Error bounds are presented in terms of moduli of smoothness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single hidden layer feedforward neural networks can represent multivariate
functions that are sums of ridge functions. These ridge functions are defined
via an activation function and customizable weights. The paper deals with best
non-linear approximation by such sums of ridge functions. Error bounds are
presented in terms of moduli of smoothness. The main focus, however, is to
prove that the bounds are best possible. To this end, counterexamples are
constructed with a non-linear, quantitative extension of the uniform
boundedness principle. They show sharpness with respect to Lipschitz classes
for the logistic activation function and for certain piecewise polynomial
activation functions. The paper is based on univariate results in (Goebbels,
St.: On sharpness of error bounds for univariate approximation by single hidden
layer feedforward neural networks. Results Math 75 (3), 2020, article 109,
https://rdcu.be/b5mKH).
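For orientation, a minimal sketch (NumPy; all sizes, names, and weight distributions are illustrative, not from the paper) of the approximants the abstract describes: a single hidden layer feedforward network computing a sum of ridge functions $x \mapsto \sum_k c_k\,\sigma(\langle w_k, x\rangle + b_k)$ with the logistic activation $\sigma$.

```python
import numpy as np

def logistic(t):
    """Logistic (sigmoid) activation."""
    return 1.0 / (1.0 + np.exp(-t))

def ridge_sum(x, W, b, c):
    """Single-hidden-layer network: a sum of ridge functions.

    Each hidden unit k contributes c[k] * logistic(<W[k], x> + b[k]),
    which is constant along directions orthogonal to W[k] -- that is
    what makes each summand a ridge function.
    """
    return c @ logistic(W @ x + b)

# Illustrative sizes: n hidden units on d-dimensional inputs.
rng = np.random.default_rng(0)
d, n = 3, 8
W = rng.normal(size=(n, d))   # inner weights (ridge directions)
b = rng.normal(size=n)        # biases
c = rng.normal(size=n)        # outer weights
x = rng.normal(size=d)
print(ridge_sum(x, W, b, c))
```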
Related papers
- Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted
Activations [52.031701581294804]
Lipschitz upper bounds for neural networks can be computed with polynomial-time guarantees.
Our paper bridges the gap and extends LipSDP beyond slope-restricted activation functions.
Our proposed analysis is general and provides a unified approach for estimating $\ell_2$ and $\ell_\infty$ Lipschitz bounds.
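The paper's semidefinite-programming machinery is not reproduced here; the sketch below only shows the crude layer-wise baseline that LipSDP-type methods tighten: with 1-Lipschitz activations, the product of the layers' spectral norms upper-bounds the $\ell_2$ Lipschitz constant.

```python
import numpy as np

def naive_l2_lipschitz_bound(weight_matrices):
    """Product of spectral norms: a valid but usually loose upper bound
    on the l2 Lipschitz constant of a feedforward network whose
    activations are 1-Lipschitz. SDP approaches such as LipSDP
    compute considerably tighter bounds."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

# Illustrative 8 -> 16 -> 16 -> 1 network.
rng = np.random.default_rng(1)
layers = [rng.normal(size=(16, 8)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(1, 16))]
print(naive_l2_lipschitz_bound(layers))
```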
arXiv Detail & Related papers (2024-01-25T09:23:31Z) - Efficient uniform approximation using Random Vector Functional Link
networks [0.0]
A Random Vector Functional Link (RVFL) network is a depth-2 neural network with random inner weights and biases.
We show that an RVFL with ReLU activation can approximate Lipschitz continuous target functions.
Our method of proof is rooted in probability theory and harmonic analysis.
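A minimal sketch of the RVFL construction described above (NumPy; the target function, width, and weight distributions are illustrative assumptions): inner weights and biases are drawn at random and frozen, and only the outer coefficients are trained, here by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_hidden, n_samples = 2, 200, 500

# Target: a Lipschitz function on [0, 1]^d (illustrative choice).
f = lambda X: np.abs(X[:, 0] - X[:, 1])

X = rng.uniform(size=(n_samples, d))
y = f(X)

# RVFL: random, frozen inner weights and biases; ReLU activation.
W = rng.normal(size=(n_hidden, d))
b = rng.uniform(-1.0, 1.0, size=n_hidden)
H = np.maximum(X @ W.T + b, 0.0)          # hidden features

# Only the outer weights are fit (linear least squares).
c, *_ = np.linalg.lstsq(H, y, rcond=None)

X_test = rng.uniform(size=(1000, d))
err = np.max(np.abs(np.maximum(X_test @ W.T + b, 0.0) @ c - f(X_test)))
print(f"uniform error on test sample: {err:.3f}")
```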
arXiv Detail & Related papers (2023-06-30T09:25:03Z) - The Implicit Bias of Minima Stability in Multivariate Shallow ReLU
Networks [53.95175206863992]
We study the type of solutions to which gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss.
We prove that although shallow ReLU networks are universal approximators, stable shallow networks are not.
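The paper's results are not reproduced here; the sketch below only illustrates the standard linear-stability criterion that "minima stability" refers to: a minimum can be stable for gradient descent with step size $\eta$ only if the largest Hessian eigenvalue of the loss (the sharpness) is at most $2/\eta$. The loss and all names are illustrative.

```python
import numpy as np

def hvp(grad, theta, v, eps=1e-5):
    """Finite-difference Hessian-vector product of a loss via its gradient."""
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2 * eps)

def sharpness(grad, theta, iters=100, seed=0):
    """Largest Hessian eigenvalue, via power iteration on HVPs."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=theta.shape)
    for _ in range(iters):
        hv = hvp(grad, theta, v)
        v = hv / np.linalg.norm(hv)
    return v @ hvp(grad, theta, v)

# Toy quadratic loss with known curvature (illustrative).
A = np.diag([10.0, 1.0])
grad = lambda theta: A @ theta
theta_star = np.zeros(2)

eta = 0.1
lam_max = sharpness(grad, theta_star)
print(f"sharpness {lam_max:.2f}; stable for step size {eta} iff <= {2/eta:.2f}")
```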
arXiv Detail & Related papers (2023-06-30T09:17:39Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
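A minimal sketch of the linearized-Laplace idea in the simplest setting where it is exact, a model that is linear in its parameters (illustrative polynomial features and hyperparameters, not the paper's setup): the posterior covariance comes from the (Gauss-Newton) Hessian at the MAP estimate, and the predictive variance from the parameter Jacobian. In Bayesian optimization, closed-form predictive variances of this kind are what feed the acquisition function.

```python
import numpy as np

rng = np.random.default_rng(3)

# Features of a model linear in its parameters (LLA is exact here).
phi = lambda x: np.stack([np.ones_like(x), x, x**2], axis=-1)

# Illustrative data and hyperparameters.
x_train = rng.uniform(-1, 1, size=30)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=30)
alpha, sigma2 = 1.0, 0.01          # prior precision, noise variance

Phi = phi(x_train)
# Laplace posterior covariance: inverse of (GGN Hessian + prior precision).
Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + alpha * np.eye(Phi.shape[1]))
theta_map = Sigma @ Phi.T @ y_train / sigma2

# Linearized predictive mean and variance at new inputs.
x_new = np.linspace(-1, 1, 5)
Phi_new = phi(x_new)
mean = Phi_new @ theta_map
var = np.einsum("ij,jk,ik->i", Phi_new, Sigma, Phi_new) + sigma2
print(np.c_[mean, var])
```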
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Deep neural network approximation of composite functions without the
curse of dimensionality [0.0]
In this article we identify a class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) without the curse of dimensionality.
The functions in our class can be expressed as a potentially unbounded number of compositions of special functions, which include products, maxima, and certain parallelized Lipschitz continuous functions.
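An illustrative toy member of this kind of composite class (my own construction, not one from the paper): nested products, maxima, and a Lipschitz clipping applied across a high-dimensional input.

```python
import numpy as np

def composite(x):
    """Toy high-dimensional composite of products and maxima:
    pairwise products, then maxima over blocks, then a 1-Lipschitz
    clipping and a sum. Illustrative only, not the paper's construction."""
    p = x[0::2] * x[1::2]                    # products of coordinate pairs
    m = p.reshape(-1, 4).max(axis=1)         # maxima over blocks of 4
    return np.minimum(1.0, np.abs(m)).sum()  # Lipschitz clipping, then sum

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=64)              # d = 64; scales to large d
print(composite(x))
```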
arXiv Detail & Related papers (2023-04-12T12:08:59Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Constrained Monotonic Neural Networks [0.685316573653194]
Wider adoption of neural networks in many critical domains such as finance and healthcare is being hindered by the need to explain their predictions.
Monotonicity constraint is one of the most requested properties in real-world scenarios.
We show it can approximate any continuous monotone function on a compact subset of $\mathbb{R}^n$.
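Not necessarily the paper's construction; one common way to hard-constrain monotonicity, sketched below for orientation, is to force non-negative effective weights (here by squaring) together with a monotone activation, which makes the network non-decreasing in every input coordinate.

```python
import numpy as np

def monotone_net(x, params):
    """MLP that is non-decreasing in every input coordinate:
    squaring each weight matrix makes all effective weights >= 0,
    and tanh is monotone. Illustrative construction only."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ (W ** 2).T + b)   # non-negative effective weights
    W, b = params[-1]
    return h @ (W ** 2).T + b

rng = np.random.default_rng(5)
params = [(rng.normal(size=(16, 3)), rng.normal(size=16)),
          (rng.normal(size=(1, 16)), rng.normal(size=1))]

x = rng.uniform(size=3)
lo, hi = monotone_net(x, params), monotone_net(x + 0.1, params)
print(hi >= lo)  # increasing any coordinate never decreases the output
```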
arXiv Detail & Related papers (2022-05-24T04:26:10Z) - Approximation of Lipschitz Functions using Deep Spline Neural Networks [21.13606355641886]
We propose to use learnable spline activation functions with at least 3 linear regions instead of ReLU networks.
We prove that this choice is optimal among all component-wise $1$-Lipschitz activation functions.
This choice is at least as expressive as the recently introduced non component-wise Groupsort activation function for spectral-norm-constrained weights.
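For orientation, a minimal sketch of the GroupSort activation mentioned above: it sorts entries within fixed-size groups, so it is 1-Lipschitz (it only permutes coordinates) but, unlike ReLU or the proposed splines, not component-wise.

```python
import numpy as np

def groupsort(z, group_size=2):
    """GroupSort activation: sort entries within consecutive groups.
    1-Lipschitz, since each group's output is a permutation of its
    input; the vector length must be divisible by group_size."""
    z = z.reshape(-1, group_size)
    return np.sort(z, axis=1).ravel()

z = np.array([3.0, -1.0, 0.5, 2.0])
print(groupsort(z))   # [-1.  3.  0.5 2. ]
```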
arXiv Detail & Related papers (2022-04-13T08:07:28Z) - Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - Analytical bounds on the local Lipschitz constants of affine-ReLU
functions [0.0]
We mathematically determine upper bounds on the local Lipschitz constant of an affine-ReLU function.
We show how these bounds can be combined to determine a bound on an entire network.
We show several examples by applying our results to AlexNet, as well as several smaller networks based on the MNIST and CIFAR-10 datasets.
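A simplified sketch in the spirit of the bound described above (valid, but coarser than the paper's): for $f(x) = \mathrm{ReLU}(Ax + b)$ over a small box around $x_0$, rows whose units are provably inactive on the whole box cannot contribute, so the spectral norm of the remaining rows bounds the local Lipschitz constant.

```python
import numpy as np

def local_lipschitz_bound(A, b, x0, r):
    """Upper bound on the local (l2) Lipschitz constant of
    f(x) = relu(A x + b) over the box ||x - x0||_inf <= r.
    Units that are provably inactive on the whole box are dropped;
    the spectral norm of the remaining rows is a valid bound.
    Simplified sketch, not the paper's exact (tighter) bound."""
    pre_max = A @ x0 + b + r * np.abs(A).sum(axis=1)  # max pre-activation
    active = pre_max > 0                              # possibly-active units
    if not np.any(active):
        return 0.0
    return np.linalg.norm(A[active], ord=2)

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x0 = rng.normal(size=5)
print(local_lipschitz_bound(A, b, x0, r=0.1),
      np.linalg.norm(A, ord=2))  # local bound vs. global spectral norm
```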
arXiv Detail & Related papers (2020-08-14T00:23:21Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)