Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness
- URL: http://arxiv.org/abs/2210.01787v1
- Date: Tue, 4 Oct 2022 17:55:27 GMT
- Title: Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness
- Authors: Bohang Zhang, Du Jiang, Di He, Liwei Wang
- Abstract summary: We study certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions.
We develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained.
- Score: 33.72713778392896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing neural networks with bounded Lipschitz constant is a promising way
to obtain certifiably robust classifiers against adversarial examples. However,
the relevant progress for the important $\ell_\infty$ perturbation setting is
rather limited, and a principled understanding of how to design expressive
$\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the
gap by studying certified $\ell_\infty$ robustness from a novel perspective of
representing Boolean functions. We derive two fundamental impossibility results
that hold for any standard Lipschitz network: one for robust classification on
finite datasets, and the other for Lipschitz function approximation. These
results identify that networks built upon norm-bounded affine layers and
Lipschitz activations intrinsically lose expressive power even in the
two-dimensional case, and shed light on how recently proposed Lipschitz
networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these
impossibilities by leveraging order statistic functions. Finally, based on
these insights, we develop a unified Lipschitz network that generalizes prior
works, and design a practical version that can be efficiently trained (making
certified robust training free). Extensive experiments show that our approach
is scalable, efficient, and consistently yields better certified robustness
across multiple datasets and perturbation radii than prior Lipschitz networks.
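For intuition on the order-statistic construction mentioned above: the basic unit of an $\ell_\infty$-distance net computes $u(x) = \lVert x - w \rVert_\infty + b$, which is 1-Lipschitz with respect to the $\ell_\infty$ norm by the reverse triangle inequality. A minimal numpy sketch (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def linf_distance_layer(x, W, b):
    """One l_inf-distance layer: unit i computes ||x - W[i]||_inf + b[i].

    Each unit is 1-Lipschitz w.r.t. the l_inf norm (reverse triangle
    inequality), so a stack of such layers is 1-Lipschitz end to end.
    """
    return np.max(np.abs(x[None, :] - W), axis=1) + b

# Illustrative 1-Lipschitz check on random inputs.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)
x, y = rng.normal(size=4), rng.normal(size=4)
output_gap = np.max(np.abs(linf_distance_layer(x, W, b) - linf_distance_layer(y, W, b)))
assert output_gap <= np.max(np.abs(x - y)) + 1e-12
```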
Related papers
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
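The box over-approximation can be illustrated with plain interval arithmetic: an $\ell_\infty$ ball (center, radius) maps through an affine layer to a box with radius $|W|r$, and elementwise monotone activations map box endpoints to box endpoints. A minimal sketch of this standard computation, not the paper's embedded-network construction:

```python
import numpy as np

def affine_box(center, radius, W, b):
    """Tightest l_inf box containing {W x + b : |x - center| <= radius}."""
    return W @ center + b, np.abs(W) @ radius

def relu_box(center, radius):
    """Image of an l_inf box under the elementwise (monotone) ReLU."""
    lo = np.maximum(center - radius, 0.0)
    hi = np.maximum(center + radius, 0.0)
    return (lo + hi) / 2.0, (hi - lo) / 2.0
```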
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks [77.82638674792292]
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
As calculating Lipschitz constants is NP-hard, techniques for estimating Lipschitz constants must navigate the trade-off between scalability and accuracy.
In this work, we significantly push the scalability frontier of a semidefinite programming technique known as LipSDP while achieving zero accuracy loss.
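For context on that trade-off: the cheapest bound is the product of per-layer spectral norms, which scales trivially but is very loose; LipSDP tightens it by solving a semidefinite program over activation slope constraints. A sketch of the naive baseline only (not LipSDP itself):

```python
import numpy as np

def product_lipschitz_bound(weights):
    """Loose but scalable l_2 Lipschitz upper bound for a feed-forward
    network with 1-Lipschitz activations: the product of per-layer
    spectral norms. LipSDP-style certificates are provably tighter
    but require solving an SDP.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # largest singular value of W
    return bound
```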
arXiv Detail & Related papers (2022-04-02T11:57:52Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on MNIST and TinyImageNet datasets.
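The certificate behind such training schemes is the standard Lipschitz margin bound: if every logit is $L$-Lipschitz in the input norm, a perturbation of size $\varepsilon$ moves each logit by at most $L\varepsilon$, so the prediction cannot flip while the margin exceeds $2L\varepsilon$. A minimal sketch using a single global constant (local bounds, as in this paper, yield larger certified radii):

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Radius within which the top-1 prediction provably cannot change,
    assuming every logit is `lipschitz_const`-Lipschitz in the input norm.
    Stability holds while margin > 2 * L * eps, i.e. eps < margin / (2L).
    """
    top_two = np.sort(logits)[-2:]
    margin = top_two[1] - top_two[0]
    return margin / (2.0 * lipschitz_const)
```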
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
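A hedged sketch of the kind of layer this describes: a residual update that takes a safely-scaled gradient step on a convex potential, which is 1-Lipschitz because such steps are nonexpansive. The exact parameterization in the paper may differ from this reconstruction:

```python
import numpy as np

def convex_potential_layer(x, W, b):
    """Residual update x -> x - (2 / ||W||_2^2) W^T relu(W x + b).

    This is a gradient step on a convex potential whose gradient is
    W^T relu(W x + b); with step size 2 / ||W||_2^2 the step is
    nonexpansive, so the layer is 1-Lipschitz by construction.
    """
    step = 2.0 / np.linalg.norm(W, 2) ** 2
    return x - step * (W.T @ np.maximum(W @ x + b, 0.0))
```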
arXiv Detail & Related papers (2021-10-25T07:12:53Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption; however, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
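One concrete way to see the well-posedness issue: the implicit layer $z = \sigma(Az + Bx + b)$ has a unique solution whenever the map is a contraction. A minimal sketch using the crude sufficient condition $\lVert A\rVert_\infty < 1$ (the paper's matrix-measure conditions are strictly weaker); the tanh activation is an illustrative choice:

```python
import numpy as np

def implicit_forward(A, B, b, x, tol=1e-8, max_iter=1000):
    """Solve the implicit layer z = tanh(A z + B x + b) by fixed-point iteration.

    If ||A||_inf < 1 (max absolute row sum) and the activation is
    1-Lipschitz, the map is an l_inf contraction, so the iteration
    converges to the unique equilibrium (Banach fixed-point theorem).
    """
    assert np.linalg.norm(A, np.inf) < 1.0, "contraction condition violated"
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(A @ z + B @ x + b)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z
```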
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Lipschitz Bounded Equilibrium Networks [3.2872586139884623]
This paper introduces new parameterizations of equilibrium neural networks, i.e. networks defined by implicit equations.
The new parameterization admits a Lipschitz bound during training via unconstrained optimization.
In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
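Checking how accurate a certified Lipschitz bound is typically means comparing it against an empirical lower bound. A simple Monte-Carlo sketch of such a lower bound (gradient-based search gives much tighter estimates in practice; `f` stands for any network forward function):

```python
import numpy as np

def empirical_lipschitz_lower_bound(f, dim, n_pairs=1000, seed=0):
    """Monte-Carlo lower bound on the l_2 Lipschitz constant of f:
    the largest observed ratio ||f(x) - f(y)|| / ||x - y|| over random
    input pairs. A certified upper bound close to this value is tight.
    """
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        best = max(best, np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y))
    return best
```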
arXiv Detail & Related papers (2020-10-05T01:00:40Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning, yet computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
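A closely related classical computation shows why convolution structure makes such bounds easy: a single-channel circular 2-D convolution is diagonalized by the 2-D DFT, so its exact spectral norm is the largest DFT magnitude of the zero-padded kernel. The paper's Toeplitz-based bound targets ordinary (non-circular) convolutions; the sketch below covers only the circular case:

```python
import numpy as np

def circular_conv_spectral_norm(kernel, input_shape):
    """Exact l_2 operator norm of a single-channel circular 2-D convolution.

    The circular convolution operator is diagonalized by the 2-D DFT,
    so its singular values are the magnitudes of the DFT of the
    zero-padded kernel; the spectral norm is their maximum.
    """
    padded = np.zeros(input_shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.max(np.abs(np.fft.fft2(padded)))
```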
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
- Approximating Lipschitz continuous functions with GroupSort neural networks [3.416170716497814]
Recent advances in adversarial attacks and Wasserstein GANs have advocated for the use of neural networks with restricted Lipschitz constants.
We show in particular how these networks can represent any Lipschitz continuous piecewise linear function.
We also prove that they are well-suited for approximating Lipschitz continuous functions and exhibit upper bounds on both their depth and size.
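GroupSort itself is simple to state: split the pre-activations into groups and sort within each group. Sorting merely permutes its inputs, so the activation is 1-Lipschitz (and gradient-norm-preserving) for every $\ell_p$ norm; with group size 2 it reduces to MaxMin. A minimal sketch:

```python
import numpy as np

def groupsort(x, group_size=2):
    """GroupSort activation: split features into consecutive groups of
    `group_size` and sort each group in ascending order.
    Assumes len(x) is divisible by group_size.
    """
    groups = x.reshape(-1, group_size)
    return np.sort(groups, axis=1).reshape(-1)
```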
arXiv Detail & Related papers (2020-06-09T13:37:43Z)