Lipschitz constant estimation of Neural Networks via sparse polynomial optimization
- URL: http://arxiv.org/abs/2004.08688v1
- Date: Sat, 18 Apr 2020 18:55:02 GMT
- Title: Lipschitz constant estimation of Neural Networks via sparse polynomial optimization
- Authors: Fabian Latorre, Paul Rolland, Volkan Cevher
- Abstract summary: LiPopt is a framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.
We show how to use the sparse connectivity of a network to significantly reduce the complexity.
We conduct experiments on networks with random weights as well as networks trained on MNIST.
- Score: 47.596834444042685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce LiPopt, a polynomial optimization framework for computing
increasingly tighter upper bounds on the Lipschitz constant of neural networks.
The underlying optimization problems boil down to either linear (LP) or
semidefinite (SDP) programming. We show how to use the sparse connectivity of a
network to significantly reduce the complexity of computation. This is
especially useful for convolutional as well as pruned neural networks. We
conduct experiments on networks with random weights as well as networks trained
on MNIST, showing that in the particular case of the $\ell_\infty$-Lipschitz
constant, our approach yields superior estimates compared to baselines
available in the literature.
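For orientation, a minimal numpy sketch of the baseline such hierarchies improve on: the product of the layers' induced operator norms, which always upper-bounds the Lipschitz constant of a network with 1-Lipschitz activations but is typically loose. This is not LiPopt itself; the shapes and weights below are illustrative.
```python
import numpy as np

def norm_product_bound(weights, p):
    # For a feed-forward network with 1-Lipschitz activations (e.g. ReLU),
    # the product of the layers' induced operator norms upper-bounds the
    # network's Lipschitz constant. It is cheap but loose, which is exactly
    # what LP/SDP hierarchies such as LiPopt tighten.
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, p)   # induced matrix norm
    return bound

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 128)), rng.normal(size=(10, 64))]
print(norm_product_bound(weights, np.inf))  # l_inf -> l_inf Lipschitz bound
print(norm_product_bound(weights, 2))       # l_2  -> l_2  Lipschitz bound
```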
Related papers
- Improving Lipschitz-Constrained Neural Networks by Learning Activation Functions [14.378778606939665]
Lipschitz-constrained neural networks have several advantages over unconstrained ones and can be applied to a variety of problems.
Neural networks with learnable 1-Lipschitz linear splines are known to be more expressive.
Our numerical experiments show that our trained networks compare favorably with existing 1-Lipschitz neural architectures.
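For illustration only (the paper's exact parametrization may differ), here is a numpy sketch of one way to keep a learnable linear-spline activation 1-Lipschitz: clip every segment slope to $[-1, 1]$ and rebuild the knot values from the clipped increments.
```python
import numpy as np

def lipschitz_projected_spline(x, xk, yk):
    # Linear-spline activation with knots xk (increasing) and learnable knot
    # values yk. Clipping each segment slope to [-1, 1] and rebuilding the
    # knot values from the clipped increments makes the spline 1-Lipschitz;
    # outside the knot range we extend linearly with the boundary slopes.
    dx = np.diff(xk)
    slopes = np.clip(np.diff(yk) / dx, -1.0, 1.0)
    yk_proj = yk[0] + np.concatenate(([0.0], np.cumsum(slopes * dx)))
    inner = np.interp(x, xk, yk_proj)
    left = yk_proj[0] + slopes[0] * (x - xk[0])
    right = yk_proj[-1] + slopes[-1] * (x - xk[-1])
    return np.where(x < xk[0], left, np.where(x > xk[-1], right, inner))

xk = np.linspace(-2.0, 2.0, 9)
yk = np.random.default_rng(0).normal(size=9)   # raw learnable parameters
print(lipschitz_projected_spline(np.linspace(-3.0, 3.0, 7), xk, yk))
```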
arXiv Detail & Related papers (2022-10-28T15:56:55Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
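The embedded-network construction is more refined than this, but the classic interval-arithmetic step below shows how an $\ell_\infty$-norm box can be soundly propagated through one affine-plus-ReLU layer; all names and shapes here are illustrative.
```python
import numpy as np

def interval_affine_relu(W, b, lo, hi):
    # Propagate an l_inf box {x : lo <= x <= hi} through x -> relu(W x + b).
    # Splitting W into its positive and negative parts gives the exact
    # interval image of the affine map; ReLU is monotone, so applying it to
    # the endpoints keeps the box a sound over-approximation.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = Wp @ lo + Wn @ hi + b
    new_hi = Wp @ hi + Wn @ lo + b
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)

# example: box around an input point with l_inf radius 0.1
rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)
lo, hi = interval_affine_relu(W, b, x - 0.1, x + 0.1)
```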
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Lipschitz Bound Analysis of Neural Networks [0.0]
Lipschitz Bound Estimation is an effective method of regularizing deep neural networks to make them robust against adversarial attacks.
In this paper, we highlight the significant gap in obtaining a non-trivial Lipschitz bound certificate for Convolutional Neural Networks (CNNs).
We also show how unrolling convolutional layers into Toeplitz matrices can be employed to convert a Convolutional Neural Network (CNN) into a fully connected network.
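The 1-D case of this conversion fits in a few lines; a sketch (the 2-D case uses doubly blocked Toeplitz matrices in the same way):
```python
import numpy as np

def conv1d_as_toeplitz(kernel, n):
    # Build the Toeplitz matrix T with T @ x equal to the 'valid' 1-D
    # cross-correlation of x with the kernel. The convolutional layer thus
    # becomes a fully connected layer whose weight matrix is sparse, with
    # shared entries along the diagonals.
    k = len(kernel)
    T = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        T[i, i:i + k] = kernel
    return T

x = np.arange(6.0)
kernel = np.array([1.0, -2.0, 1.0])
T = conv1d_as_toeplitz(kernel, len(x))
# np.convolve flips its kernel, so compare against the flipped kernel
assert np.allclose(T @ x, np.convolve(x, kernel[::-1], mode="valid"))
```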
arXiv Detail & Related papers (2022-07-14T23:40:22Z)
- Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks [6.173968909465726]
We introduce a general class of neural networks suitable for sparse reconstruction from few linear measurements.
By allowing a wide range of degrees of weight-sharing between the layers, we enable a unified analysis for very different neural network types.
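A hedged sketch of such unfolding, using plain ISTA for $\ell_1$-regularized recovery rather than the paper's general class: each iteration is an affine map followed by soft-thresholding, i.e. one network layer; learned variants make the fixed matrices trainable, with varying degrees of weight-sharing across layers.
```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(y, A, n_layers=10, lam=0.1):
    # Each ISTA iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1 is an
    # affine map followed by a soft-thresholding nonlinearity, i.e. one
    # network layer. Learned variants (e.g. LISTA) replace the fixed
    # matrices below with trainable weights, shared across layers or not.
    L = np.linalg.norm(A, 2) ** 2        # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50); x_true[:3] = 1.0  # sparse ground-truth signal
print(unrolled_ista(A @ x_true, A, n_layers=100))
```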
arXiv Detail & Related papers (2021-12-08T16:17:33Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
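The mechanism connecting Lipschitz bounds to certificates is standard and easy to sketch: assuming each logit is `lip`-Lipschitz with respect to the chosen input norm, the prediction cannot change within radius margin / (2 * lip), so tighter local bounds directly yield larger certified radii.
```python
import numpy as np

def certified_radius(logits, true_class, lip):
    # If every logit is lip-Lipschitz w.r.t. the chosen input norm, each
    # margin f_true - f_other is 2*lip-Lipschitz, so the predicted class
    # cannot change within radius margin / (2 * lip). Tighter (local)
    # Lipschitz bounds directly enlarge this certified radius.
    margins = logits[true_class] - np.delete(logits, true_class)
    return max(margins.min(), 0.0) / (2.0 * lip)

print(certified_radius(np.array([3.1, 0.2, -1.0]), true_class=0, lip=2.0))
# -> 0.725: no perturbation smaller than this can flip the prediction
```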
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
arXiv Detail & Related papers (2021-10-18T18:00:36Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
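A minimal sketch of the well-posedness question, assuming ReLU activations and the simple sufficient condition $\|W\|_\infty < 1$ (the paper's matrix-measure conditions are weaker than this): the equilibrium equation then has a unique solution and plain fixed-point iteration converges.
```python
import numpy as np

def implicit_layer(W, U, b, x, n_iter=100):
    # Equilibrium layer: solve z = relu(W z + U x + b). When the induced
    # infinity-norm ||W||_inf < 1 the map is a contraction in the l_inf
    # norm (ReLU is 1-Lipschitz componentwise), so the fixed point exists,
    # is unique, and plain forward iteration converges.
    assert np.linalg.norm(W, np.inf) < 1.0, "contraction condition violated"
    z = np.zeros(W.shape[0])
    for _ in range(n_iter):
        z = np.maximum(W @ z + U @ x + b, 0.0)
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5)); W *= 0.9 / np.linalg.norm(W, np.inf)
U, b, x = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
print(implicit_layer(W, U, b, x))
```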
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization [2.388501293246858]
We propose a convex program to certify incremental quadratic constraints on the map of neural networks over a region of interest.
These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction.
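For a concrete flavor, here is a numpy-only sketch of the related LipSDP-style $\ell_2$ Lipschitz certificate (Fazlyab et al.) for a one-hidden-layer ReLU network, with the multiplier restricted to $T = tI$ so the semidefinite condition reduces to a Schur-complement eigenvalue test; the paper's convex programs handle richer multiplier classes and the other properties listed above.
```python
import numpy as np

def lipsdp_tI_bound(W0, W1, n_grid=200):
    # f(x) = W1 relu(W0 x). ReLU satisfies the incremental quadratic
    # constraint of a [0, 1] slope-restricted nonlinearity; with multiplier
    # T = t*I the LipSDP condition
    #   [[-rho I, t W0^T], [t W0, -2 t I + W1^T W1]] <= 0
    # holds iff 2 t I - W1^T W1 > 0 and
    #   rho >= lambda_max(t^2 W0^T (2 t I - W1^T W1)^{-1} W0)
    # by a Schur complement. The certified l2 Lipschitz bound is sqrt(rho).
    n1 = W0.shape[0]
    t_min = np.linalg.norm(W1, 2) ** 2 / 2       # feasibility threshold
    best = np.inf
    for t in np.linspace(1.01 * t_min, 50 * t_min, n_grid):
        S = 2 * t * np.eye(n1) - W1.T @ W1       # positive definite here
        rho = t ** 2 * np.linalg.eigvalsh(W0.T @ np.linalg.solve(S, W0)).max()
        best = min(best, rho)
    return np.sqrt(best)

rng = np.random.default_rng(0)
W0, W1 = rng.normal(size=(32, 16)), rng.normal(size=(10, 32))
print(lipsdp_tI_bound(W0, W1))                        # certified bound
print(np.linalg.norm(W1, 2) * np.linalg.norm(W0, 2))  # looser norm product
```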
arXiv Detail & Related papers (2020-12-10T21:15:00Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning, yet computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
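The circulant special case makes this tangible: for a circular 1-D convolution the singular values are exactly the magnitudes of the DFT of the zero-padded kernel, so the spectral norm ($\ell_2$ Lipschitz constant) of the layer costs one FFT. A sketch of that structure (the paper's bound addresses general 2-D layers):
```python
import numpy as np

def circular_conv_spectral_norm(kernel, n):
    # A circular convolution on length-n signals is multiplication by a
    # circulant matrix, whose singular values are exactly the magnitudes of
    # the DFT of the zero-padded kernel. The layer's spectral norm (its
    # l2 Lipschitz constant) therefore costs a single FFT.
    padded = np.zeros(n)
    padded[:len(kernel)] = kernel
    return np.abs(np.fft.fft(padded)).max()

# sanity check against the explicit circulant matrix
n, kernel = 8, np.array([1.0, -2.0, 1.0])
padded = np.r_[kernel, np.zeros(n - len(kernel))]
C = np.stack([np.roll(padded, i) for i in range(n)])
assert np.isclose(circular_conv_spectral_norm(kernel, n), np.linalg.norm(C, 2))
```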
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.