Analytical bounds on the local Lipschitz constants of ReLU networks
- URL: http://arxiv.org/abs/2104.14672v1
- Date: Thu, 29 Apr 2021 21:57:47 GMT
- Title: Analytical bounds on the local Lipschitz constants of ReLU networks
- Authors: Trevor Avant and Kristi A. Morgansen
- Abstract summary: We determine analytical upper bounds on local Lipschitz constants by deriving Lipschitz constants and bounds for ReLU, affine-ReLU, and max pooling functions.
Our method produces the largest known bounds on minimum adversarial perturbations for large networks such as AlexNet and VGG-16.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we determine analytical upper bounds on the local Lipschitz
constants of feedforward neural networks with ReLU activation functions. We do
so by deriving Lipschitz constants and bounds for ReLU, affine-ReLU, and max
pooling functions, and combining the results to determine a network-wide bound.
Our method uses several insights to obtain tight bounds, such as keeping track
of the zero elements of each layer, and analyzing the composition of affine and
ReLU functions. Furthermore, we employ a careful computational approach which
allows us to apply our method to large networks such as AlexNet and VGG-16. We
present several examples using different networks, which show how our local
Lipschitz bounds are tighter than the global Lipschitz bounds. We also show how
our method can be applied to provide adversarial bounds for classification
networks. These results show that our method produces the largest known bounds
on minimum adversarial perturbations for large networks such as AlexNet and
VGG-16.
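
  The following is a minimal numerical sketch, not the paper's algorithm: it chains per-layer l2 Lipschitz constants into a network-wide bound, applies a simplified version of the "keep track of the zero elements" idea to tighten that bound on a small ball around an input, and converts a Lipschitz bound plus classification margin into a lower bound on the minimum adversarial perturbation. The function names, layer sizes, the eps radius, and the standard sqrt(2) margin argument are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)  # ReLU is 1-Lipschitz in every p-norm

def global_lipschitz_upper_bound(weights):
    """Generic composition bound: since ReLU and max pooling are 1-Lipschitz,
    L(network) <= product of the affine layers' l2 operator norms
    (largest singular values)."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)
    return bound

def local_affine_relu_bound(W, b, x, eps):
    """Simplified version of the 'track the zero elements' insight: a ReLU
    unit whose pre-activation stays negative everywhere on the l2 ball
    B(x, eps) outputs 0 there, so its row of W can be dropped before taking
    the spectral norm. The paper's affine-ReLU bounds are tighter still."""
    pre = W @ x + b
    # Unit i can be active somewhere in B(x, eps) only if
    # max over the ball of (w_i . x' + b_i) = pre_i + eps * ||w_i|| > 0.
    can_be_active = pre + eps * np.linalg.norm(W, axis=1) > 0
    if not can_be_active.any():
        return 0.0
    return np.linalg.norm(W[can_be_active], 2)

def min_adversarial_perturbation_bound(logits, lipschitz_bound):
    """Standard margin argument: the difference of any two logits is at most
    sqrt(2)*L-Lipschitz in l2, so no perturbation smaller than
    margin / (sqrt(2) * L) can change the predicted class."""
    sorted_logits = np.sort(logits)
    margin = sorted_logits[-1] - sorted_logits[-2]
    return margin / (np.sqrt(2) * lipschitz_bound)

# Tiny two-layer example with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(10, 16)), rng.normal(size=10)
x = rng.normal(size=8)
logits = W2 @ relu(W1 @ x + b1) + b2

eps = 0.1
L_global = global_lipschitz_upper_bound([W1, W2])
L_local = local_affine_relu_bound(W1, b1, x, eps) * np.linalg.norm(W2, 2)

# A local constant is only valid inside its ball, so cap the certified
# radius at eps.
adv_bound = min(eps, min_adversarial_perturbation_bound(logits, L_local))
print("global bound:", L_global, " local bound on B(x, 0.1):", L_local)
print("certified minimum adversarial perturbation (l2):", adv_bound)
```

  In this sketch the local bound can never exceed the global one, since dropping rows of a weight matrix can only shrink its spectral norm; the paper's joint analysis of the affine-ReLU composition and of max pooling tightens the bound further and scales it to networks such as AlexNet and VGG-16.
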
Related papers
- Three Quantization Regimes for ReLU Networks [3.823356975862005]
We establish the fundamental limits in the approximation of Lipschitz functions by deep ReLU neural networks with finite-precision weights.
In the proper-quantization regime, neural networks exhibit memory-optimality in the approximation of Lipschitz functions.
arXiv Detail & Related papers (2024-05-03T09:27:31Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
- Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation [79.13041340708395]
Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization.
Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks.
We develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of the Clarke Jacobian.
arXiv Detail & Related papers (2022-10-13T22:23:22Z)
- Approximation speed of quantized vs. unquantized ReLU neural networks and beyond [0.0]
We consider general approximation families encompassing ReLU neural networks.
We use $\infty$-encodability to guarantee that ReLU networks can be uniformly quantized.
We also prove that ReLU networks share a common limitation with many other approximation families.
arXiv Detail & Related papers (2022-05-24T07:48:12Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Analytical bounds on the local Lipschitz constants of affine-ReLU functions [0.0]
We mathematically determine upper bounds on the local Lipschitz constant of an affine-ReLU function.
We show how these bounds can be combined to determine a bound on an entire network.
We show several examples by applying our results to AlexNet, as well as several smaller networks based on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-08-14T00:23:21Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning.
However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
- Approximating Lipschitz continuous functions with GroupSort neural networks [3.416170716497814]
Recent advances in adversarial attacks and Wasserstein GANs have advocated for the use of neural networks with restricted Lipschitz constants.
We show in particular how these networks can represent any Lipschitz continuous piecewise linear function.
We also prove that they are well-suited for approximating Lipschitz continuous functions and exhibit upper bounds on both their depth and size.
arXiv Detail & Related papers (2020-06-09T13:37:43Z)
- Lipschitz constant estimation of Neural Networks via sparse polynomial optimization [47.596834444042685]
LiPopt is a framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.
We show how to use the sparse connectivity of a network to significantly reduce the complexity.
We conduct experiments on networks with random weights as well as networks trained on MNIST.
arXiv Detail & Related papers (2020-04-18T18:55:02Z)
- Exactly Computing the Local Lipschitz Constant of ReLU Networks [98.43114280459271]
The local Lipschitz constant of a neural network is a useful metric for robustness, generalization, and fairness evaluation.
We show strong inapproximability results for estimating Lipschitz constants of ReLU networks.
We leverage this algorithm to evaluate the tightness of competing Lipschitz estimators and the effects of regularized training on the Lipschitz constant.
arXiv Detail & Related papers (2020-03-02T22:15:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.