Certified Robustness via Dynamic Margin Maximization and Improved
Lipschitz Regularization
- URL: http://arxiv.org/abs/2310.00116v3
- Date: Tue, 12 Mar 2024 18:57:37 GMT
- Title: Certified Robustness via Dynamic Margin Maximization and Improved
Lipschitz Regularization
- Authors: Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa
- Abstract summary: We develop a robust training algorithm to increase the margin in the output (logit) space while regularizing the Lipschitz constant of the model along vulnerable directions.
The relative accuracy of the bounds prevents excessive regularization and allows for more direct manipulation of the decision boundary.
Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm obtains competitively improved results compared to the state-of-the-art.
- Score: 43.98504250013897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To improve the robustness of deep classifiers against adversarial
perturbations, many approaches have been proposed, such as designing new
architectures with better robustness properties (e.g., Lipschitz-capped
networks), or modifying the training process itself (e.g., min-max
optimization, constrained learning, or regularization). These approaches,
however, might not be effective at increasing the margin in the input (feature)
space. As a result, there has been an increasing interest in developing
training procedures that can directly manipulate the decision boundary in the
input space. In this paper, we build upon recent developments in this category
by developing a robust training algorithm whose objective is to increase the
margin in the output (logit) space while regularizing the Lipschitz constant of
the model along vulnerable directions. We show that these two objectives can
directly promote larger margins in the input space. To this end, we develop a
scalable method for calculating guaranteed differentiable upper bounds on the
Lipschitz constant of neural networks accurately and efficiently. The relative
accuracy of the bounds prevents excessive regularization and allows for more
direct manipulation of the decision boundary. Furthermore, our Lipschitz
bounding algorithm exploits the monotonicity and Lipschitz continuity of the
activation layers, and the resulting bounds can be used to design new layers
with controllable bounds on their Lipschitz constant. Experiments on the MNIST,
CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm
obtains competitively improved results compared to the state-of-the-art.
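The abstract couples two pieces: a margin objective in the output (logit) space and a penalty on a differentiable Lipschitz upper bound. The sketch below illustrates that coupling in PyTorch; it is not the authors' implementation. The Lipschitz term here is the classical product of per-layer spectral norms (estimated by power iteration and restricted to linear layers), standing in for the paper's tighter, direction-aware bounds, and names such as `target_margin` and `lip_weight` are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_norm_estimate(weight, n_iters=20):
    # Power-iteration estimate of a matrix's largest singular value.
    # Note: power iteration approaches the spectral norm from below, so this
    # is an estimate rather than a certified bound (the paper instead computes
    # guaranteed upper bounds).
    w = weight.reshape(weight.shape[0], -1)
    u = torch.randn(w.shape[0], device=w.device)
    v = torch.randn(w.shape[1], device=w.device)
    with torch.no_grad():
        for _ in range(n_iters):
            v = F.normalize(w.t() @ u, dim=0)
            u = F.normalize(w @ v, dim=0)
    # Gradients flow only through this final bilinear form (standard trick).
    return torch.dot(u, w @ v)

def lipschitz_proxy(model):
    # Product of per-layer spectral norms of the linear layers: a classical,
    # loose proxy for the network's Lipschitz constant. Convolutional layers
    # need the dedicated bounds discussed in the related papers below.
    bound = 1.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            bound = bound * spectral_norm_estimate(m.weight)
    return bound

def margin_lipschitz_loss(model, x, y, target_margin=1.0, lip_weight=0.1):
    # Hinge on the logit-space margin between the true class and the strongest
    # competitor, plus a Lipschitz penalty: a large logit margin relative to
    # the Lipschitz constant translates into a large input-space margin.
    logits = model(x)
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    runner_up = logits.masked_fill(
        F.one_hot(y, logits.size(1)).bool(), float("-inf")).amax(dim=1)
    margin_loss = F.relu(target_margin - (true_logit - runner_up)).mean()
    return margin_loss + lip_weight * lipschitz_proxy(model)
```

In a training loop this loss would simply replace cross-entropy; note that the actual method, per the abstract, regularizes the Lipschitz constant along vulnerable directions rather than penalizing a single global bound as this sketch does.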
Related papers
- Achieving Constraints in Neural Networks: A Stochastic Augmented
Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ a Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram
Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches; a minimal sketch of the Gram iteration idea appears after this list.
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
- Efficiently Computing Local Lipschitz Constants of Neural Networks via
Bound Propagation [79.13041340708395]
Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization.
Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks.
We develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of the Clarke Jacobian.
arXiv Detail & Related papers (2022-10-13T22:23:22Z)
- Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz
Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning.
However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
arXiv Detail & Related papers (2020-06-15T13:23:34Z)
- Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing [7.4769019455423855]
We formulate the adversarially robust learning problem as one of loss minimization with a Lipschitz constraint.
We show that the saddle point of the associated Lagrangian is characterized by a Poisson equation with weighted Laplace operator.
We design a provably robust training scheme using graph-based discretization of the input space and a primal-dual algorithm to converge to the Lagrangian's saddle point.
arXiv Detail & Related papers (2020-06-05T22:02:21Z)
- Training robust neural networks using Lipschitz bounds [0.0]
Because of their vulnerability to adversarial perturbations, neural networks (NNs) are hardly used in safety-critical applications.
One measure of robustness to adversarial perturbations is the Lipschitz constant of the input-output map defined by an NN.
We propose a framework to train multi-layer NNs while at the same time encouraging robustness by keeping their Lipschitz constant small.
arXiv Detail & Related papers (2020-05-06T16:07:46Z)
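As noted in the Gram-iteration entry above, the following is a minimal sketch of the Gram iteration idea, applied to a plain weight matrix for simplicity. The cited paper's actual contribution is handling convolutional layers via circulant matrix theory, which this sketch does not attempt, and the iteration count is illustrative.

```python
import torch

def gram_iteration_spectral_bound(weight, n_iters=6):
    # Differentiable upper bound on sigma_1(W), the largest singular value.
    # Each Gram step G <- G^T G squares the (singular) spectrum, so after t
    # steps sigma_1(W)^(2^t) <= ||G_t||_F, i.e. sigma_1(W) <= ||G_t||_F^(1/2^t).
    # Rescaling by the Frobenius norm at each step avoids overflow; the
    # accumulated scale is tracked in log space.
    G = weight.reshape(weight.shape[0], -1)
    log_scale = torch.zeros((), dtype=G.dtype, device=G.device)
    for _ in range(n_iters):
        norm = G.norm(p="fro")
        G = G / norm
        log_scale = 2.0 * log_scale + 2.0 * torch.log(norm)
        G = G.t() @ G
    log_bound = (log_scale + torch.log(G.norm(p="fro"))) / (2 ** n_iters)
    return torch.exp(log_bound)
```

Because the exponent doubles at every step, a handful of iterations typically yields a bound very close to the true spectral norm while remaining differentiable and guaranteed to be an over-estimate.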