Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural
Networks
- URL: http://arxiv.org/abs/2204.00846v2
- Date: Mon, 8 Jan 2024 06:41:37 GMT
- Title: Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural
Networks
- Authors: Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J.
Pappas, and Rajeev Alur
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lipschitz constants of neural networks allow for guarantees of robustness in
image classification, safety in controller design, and generalizability beyond
the training data. As calculating Lipschitz constants is NP-hard, techniques
for estimating Lipschitz constants must navigate the trade-off between
scalability and accuracy. In this work, we significantly push the scalability
frontier of a semidefinite programming technique known as LipSDP while
achieving zero accuracy loss. We first show that LipSDP has chordal sparsity,
which allows us to derive a chordally sparse formulation that we call
Chordal-LipSDP. The key benefit is that the main computational bottleneck of
LipSDP, a large semidefinite constraint, is now decomposed into an equivalent
collection of smaller ones: allowing Chordal-LipSDP to outperform LipSDP
particularly as the network depth grows. Moreover, our formulation uses a
tunable sparsity parameter that enables one to gain tighter estimates without
incurring a significant computational cost. We illustrate the scalability of
our approach through extensive numerical experiments.
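The decomposition the abstract describes can be illustrated on a toy matrix. The sketch below is not the paper's formulation; it only demonstrates the underlying fact (the Agler et al. clique decomposition for chordal patterns): a large PSD constraint on a chordally sparse matrix is equivalent to finding small PSD blocks, one per maximal clique, that sum to the original matrix.

```python
# Minimal sketch of the clique decomposition behind Chordal-LipSDP
# (illustrative only, not the paper's actual solver).  A banded sparsity
# pattern is chordal; its single large PSD constraint is equivalent to
# PSD constraints on small clique-supported blocks that sum to the matrix.

def cholesky_psd(A, tol=1e-9):
    """True if the symmetric matrix A (list of lists) is positive
    semidefinite, via outer-product elimination with zero-pivot handling."""
    n = len(A)
    M = [row[:] for row in A]
    for k in range(n):
        if M[k][k] < -tol:
            return False
        if M[k][k] <= tol:
            # zero pivot: the rest of row k must also vanish for PSD-ness
            if any(abs(M[k][j]) > tol for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j] / M[k][k]
    return True

def submatrix(A, idx):
    return [[A[i][j] for j in idx] for i in idx]

# Banded (tridiagonal) PSD matrix: chordal pattern with maximal cliques
# {0,1}, {1,2}, {2,3}.
A = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 2.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
cliques = [(0, 1), (1, 2), (2, 3)]

# One explicit decomposition: a small PSD block per clique ...
terms = {
    (0, 1): [[2.0, 1.0], [1.0, 1.0]],
    (1, 2): [[1.0, 1.0], [1.0, 1.0]],
    (2, 3): [[1.0, 1.0], [1.0, 2.0]],
}
assert all(cholesky_psd(B) for B in terms.values())  # only 2x2 checks needed

# ... whose lifted sum reproduces the full matrix A.
S = [[0.0] * 4 for _ in range(4)]
for idx, B in terms.items():
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            S[i][j] += B[a][b]
assert S == A and cholesky_psd(A)
```

In Chordal-LipSDP the small blocks become decision variables, so the solver handles many small semidefinite constraints instead of one that grows with network depth.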
Related papers
- Scalable Lipschitz Estimation for CNNs [3.8125535078871127]
Estimating the Lipschitz constant of deep neural networks is of growing interest.
CNNs underpin much of the recent success in computer vision related applications.
We propose a novel method to accelerate Lipschitz constant estimation for CNNs.
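For context on the trade-off such methods navigate, the cheapest scalable estimate is the product of per-layer spectral norms. The pure-Python sketch below (function names are illustrative, not from any of these papers) computes that baseline bound via power iteration.

```python
# Baseline bound that scalable estimators refine (illustrative sketch):
# a composition of linear layers and 1-Lipschitz activations has Lipschitz
# constant at most the product of the layers' spectral norms, each
# estimated here by power iteration on a small dense matrix.
def spectral_norm(W, iters=200):
    n, m = len(W), len(W[0])
    v = [1.0] * m
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(m)) for i in range(n)]  # W v
        v = [sum(W[i][j] * u[i] for i in range(n)) for j in range(m)]  # W^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(m)) for i in range(n)]
    return sum(x * x for x in u) ** 0.5  # ||W v|| ~ largest singular value

def naive_lipschitz_bound(layers):
    """Product of spectral norms: cheap and scalable, but often loose --
    the looseness that SDP-based methods like LipSDP tighten."""
    bound = 1.0
    for W in layers:
        bound *= spectral_norm(W)
    return bound
```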
arXiv Detail & Related papers (2024-03-27T14:28:44Z)
- Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations [52.031701581294804]
Lipschitz bounds for neural networks can be computed with semidefinite programming, but existing formulations such as LipSDP only handle slope-restricted activation functions.
Our paper bridges this gap and extends LipSDP beyond slope-restricted activation functions.
Our proposed analysis is general and provides a unified approach for estimating $\ell_2$ and $\ell_\infty$ Lipschitz bounds.
arXiv Detail & Related papers (2024-01-25T09:23:31Z)
- DP-SGD Without Clipping: The Lipschitz Neural Network Way [5.922390405022253]
State-of-the-art approaches for training Differentially Private (DP) deep neural networks (DNNs) rely on clipping per-sample gradients.
By bounding the Lipschitz constant of each layer with respect to its parameters, we prove that we can train these networks with privacy guarantees.
Our analysis not only allows the computation of the aforementioned sensitivities at scale, but also provides guidance on how to maximize the gradient-to-noise ratio for fixed privacy guarantees.
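The core idea can be sketched in a few lines. This is a toy scalar version, not the paper's training procedure: it only shows how a provable per-sample gradient bound replaces clipping when calibrating Gaussian noise.

```python
# Toy sketch (not the paper's method): if each per-sample gradient is
# provably bounded by `lip_bound` -- e.g. via layer-wise Lipschitz bounds
# on the network -- no clipping is needed; the bound itself gives the
# sensitivity used to calibrate Gaussian-mechanism noise.
import random

def noisy_mean_gradient(grads, lip_bound, noise_multiplier, rng):
    n = len(grads)
    assert all(abs(g) <= lip_bound for g in grads)  # holds by construction
    sensitivity = 2.0 * lip_bound / n  # replace-one sensitivity of the mean
    sigma = noise_multiplier * sensitivity
    return sum(grads) / n + rng.gauss(0.0, sigma)
```

With `noise_multiplier = 0` this reduces to the exact mean gradient; in actual DP training, the multiplier is derived from the target (epsilon, delta) budget.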
arXiv Detail & Related papers (2023-05-25T16:05:46Z)
- Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness [33.72713778392896]
We study certified $\ell_\infty$ robustness from the novel perspective of representing Boolean functions.
We develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained.
arXiv Detail & Related papers (2022-10-04T17:55:27Z)
- Lipschitz Continuity Retained Binary Neural Network [52.17734681659175]
We introduce Lipschitz continuity as a rigorous criterion for defining the robustness of binary neural networks (BNNs).
We then propose retaining Lipschitz continuity as a regularization term to improve model robustness.
Our experiments show that our BNN-specific regularization method effectively strengthens the robustness of BNNs.
arXiv Detail & Related papers (2022-07-13T22:55:04Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and significantly reduced memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
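As a toy illustration of well-posedness via contraction (an $\ell_\infty$ stand-in for the paper's non-Euclidean framework; the helper name is hypothetical): if the implicit map is a contraction, its fixed point exists, is unique, and plain iteration finds it.

```python
# Toy implicit layer solving x = tanh(W x + u).  Since tanh is 1-Lipschitz,
# ||W||_inf < 1 (max absolute row sum) makes the map a contraction in the
# l_inf norm, so fixed-point iteration converges from any start -- the
# well-posedness property the paper's framework certifies more generally.
import math

def implicit_layer(W, u, iters=100):
    n = len(u)
    x = [0.0] * n
    for _ in range(iters):
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + u[i])
             for i in range(n)]
    return x
```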
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- LipBaB: Computing exact Lipschitz constant of ReLU networks [0.0]
LipBaB is a framework to compute certified bounds of the local Lipschitz constant of deep neural networks.
Our algorithm provides provably exact computation of the local Lipschitz constant for any $p$-norm.
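Exact computation is feasible because a ReLU network is piecewise linear. The sketch below shows the idea in the simplest possible setting, a one-hidden-layer network with 1-D input, where the linear pieces can be enumerated directly; LipBaB's branch-and-bound handles the general high-dimensional case, and this toy is only illustrative.

```python
# Exact global Lipschitz constant of a 1-D, one-hidden-layer ReLU network
#     f(x) = sum_j v[j] * relu(w[j] * x + b[j])
# by probing one point inside each linear piece between the units'
# breakpoints (a toy stand-in for LipBaB's branch-and-bound).
def exact_lipschitz_1d(w, v, b):
    # breakpoints where some hidden unit switches on/off
    kinks = sorted(-bj / wj for wj, bj in zip(w, b) if wj != 0.0)
    if not kinks:
        probes = [0.0]
    else:
        probes = [kinks[0] - 1.0, kinks[-1] + 1.0]          # the two tails
        probes += [(a + c) / 2.0 for a, c in zip(kinks, kinks[1:]) if a < c]
    best = 0.0
    for x in probes:
        # slope of the active linear piece at x
        slope = sum(vj * wj for wj, vj, bj in zip(w, v, b)
                    if wj * x + bj > 0.0)
        best = max(best, abs(slope))
    return best
```

For example, `w=[1,1], v=[1,-2], b=[0,-1]` encodes f(x) = relu(x) - 2 relu(x-1), whose pieces have slopes 0, 1, and -1, so the exact constant is 1.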
arXiv Detail & Related papers (2021-05-12T08:06:11Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning.
However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
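The flavor of such bounds can be seen in the 1-D circular case (a simplified analogue, not the paper's 2-D Toeplitz result): a circular convolution is a circulant matrix, whose singular values are exactly the magnitudes of the kernel's DFT, so its spectral norm is both tight and cheap to compute.

```python
# 1-D illustration: the Lipschitz constant (spectral norm) of circular
# convolution by `kernel` on length-n signals is max_m |DFT(kernel)[m]|,
# because circulant matrices are diagonalized by the DFT.
import cmath

def circular_conv_lipschitz(kernel, n):
    k = list(kernel) + [0.0] * (n - len(kernel))  # zero-pad to length n
    return max(
        abs(sum(k[t] * cmath.exp(-2j * cmath.pi * m * t / n)
                for t in range(n)))
        for m in range(n)
    )
```

For the averaging kernel `[0.5, 0.5]` the maximum DFT magnitude is attained at frequency zero and equals 1, confirming that averaging is 1-Lipschitz.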
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.