The robust way to stack and bag: the local Lipschitz way
- URL: http://arxiv.org/abs/2206.00513v1
- Date: Wed, 1 Jun 2022 14:15:12 GMT
- Title: The robust way to stack and bag: the local Lipschitz way
- Authors: Thulasi Tholeti, Sheetal Kalyani
- Abstract summary: We exploit a relationship between a neural network's local Lipschitz constant and its adversarial robustness to construct an ensemble of neural networks.
The proposed architecture is found to be more robust than a single network and traditional ensemble methods.
- Score: 13.203765985718201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has established that the local Lipschitz constant of a neural
network directly influences its adversarial robustness. We exploit this
relationship to construct an ensemble of neural networks which not only
improves the accuracy, but also provides increased adversarial robustness. The
local Lipschitz constants for two different ensemble methods - bagging and
stacking - are derived and the architectures best suited for ensuring
adversarial robustness are deduced. The proposed ensemble architectures are
tested on MNIST and CIFAR-10 datasets in the presence of white-box attacks,
FGSM and PGD. The proposed architecture is found to be more robust than a) a
single network and b) traditional ensemble methods.
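To make the abstract's ingredients concrete, below is a minimal PyTorch sketch of a bagging-style ensemble, an empirical input-gradient surrogate for the local Lipschitz constant, and an FGSM evaluation step. The layer sizes, the gradient-norm surrogate, and the logit-averaging ensemble are illustrative assumptions, not the paper's derived construction.

```python
# Illustrative sketch only (PyTorch). The network sizes, the gradient-norm
# surrogate for the local Lipschitz constant, and the logit-averaging ensemble
# are assumptions; the paper derives the constants analytically for its
# specific bagging/stacking architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_member():
    # A small MLP stand-in for one ensemble member.
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

class BaggingEnsemble(nn.Module):
    """Bagging-style ensemble: members are trained independently, logits averaged."""
    def __init__(self, n_members=5):
        super().__init__()
        self.members = nn.ModuleList(make_member() for _ in range(n_members))

    def forward(self, x):
        return torch.stack([m(x) for m in self.members]).mean(dim=0)

def local_lipschitz_estimate(model, x, y):
    """Empirical lower bound on the local Lipschitz constant at x:
    the norm of the loss gradient with respect to the input."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM white-box attack of the kind used in the evaluation."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

ensemble = BaggingEnsemble()
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(local_lipschitz_estimate(ensemble, x, y))          # per-sample gradient norms
print((ensemble(fgsm(ensemble, x, y)).argmax(1) == y).float().mean())  # robust accuracy
```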
Related papers
- Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
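As a point of reference for the box over-approximation idea, the sketch below propagates an $\ell_\infty$ input box through one affine+ReLU layer with plain interval arithmetic; the paper's embedded network comes from a non-Euclidean contraction analysis that is not reproduced here, so treat this as a generic illustration.

```python
# Generic interval-arithmetic sketch of an ell_infty box over-approximation of the
# reachable set of one affine+ReLU layer. Illustrative only; not the paper's
# contraction-based embedded network.
import torch

def box_forward(W, b, lower, upper):
    """Propagate an input box [lower, upper] through y = relu(W x + b),
    returning a box guaranteed to contain every reachable output."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    y_lower = W_pos @ lower + W_neg @ upper + b
    y_upper = W_pos @ upper + W_neg @ lower + b
    return y_lower.relu(), y_upper.relu()

W, b = torch.randn(5, 3), torch.randn(5)
x, eps = torch.randn(3), 0.1
lo, hi = box_forward(W, b, x - eps, x + eps)
assert torch.all(lo <= hi)
```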
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Lipschitz Bound Analysis of Neural Networks [0.0]
Lipschitz Bound Estimation is an effective method of regularizing deep neural networks to make them robust against adversarial attacks.
In this paper, we highlight the significant gap in obtaining a non-trivial Lipschitz bound certificate for Convolutional Neural Networks (CNNs)
We also show that unrolling Convolutional layers or Toeplitz matrices can be employed to convert Convolutional Neural Networks (CNNs) to a Fully Connected Network.
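The unrolling idea can be illustrated directly: probe the convolution with unit inputs to materialize the equivalent dense (Toeplitz-structured) matrix, after which fully connected Lipschitz machinery applies. The shapes and the identity-probing construction below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of "unrolling" a convolution into the dense (Toeplitz-structured) matrix
# of the same linear map. Shapes and the probing construction are illustrative.
import torch
import torch.nn.functional as F

def conv_to_matrix(weight, in_shape):
    """Return the dense matrix M with (conv(x)).flatten() == M @ x.flatten()."""
    c, h, w = in_shape
    n = c * h * w
    basis = torch.eye(n).reshape(n, c, h, w)            # probe with unit inputs
    out = F.conv2d(basis, weight, padding=weight.shape[-1] // 2)
    return out.reshape(n, -1).T                          # columns are the responses

weight = torch.randn(4, 3, 3, 3)                         # 4 filters, 3 channels, 3x3
M = conv_to_matrix(weight, (3, 8, 8))

x = torch.randn(1, 3, 8, 8)
y_conv = F.conv2d(x, weight, padding=1).flatten()
assert torch.allclose(M @ x.flatten(), y_conv, atol=1e-5)

# Exact spectral norm (ell_2 Lipschitz constant) of the unrolled layer.
print(torch.linalg.matrix_norm(M, ord=2))
```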
arXiv Detail & Related papers (2022-07-14T23:40:22Z) - Adversarial Vulnerability of Randomized Ensembles [12.082239973914326]
We show that randomized ensembles are more vulnerable to imperceptible adversarial perturbations than even standard adversarially trained (AT) models.
We propose a theoretically-sound and efficient adversarial attack algorithm (ARC) capable of compromising random ensembles even in cases where adaptive PGD fails to do so.
arXiv Detail & Related papers (2022-06-14T10:37:58Z) - Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain models robust against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z) - Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
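A minimal sketch of one way a convex-potential residual block can yield a built-in 1-Lipschitz map follows; the spectral normalization and the exact form z = x - (2/||W||_2^2) W^T relu(W x + b) reflect our reading of the construction and should be treated as assumptions.

```python
# Sketch of a residual block built from a convex potential,
# z = x - (2 / ||W||_2^2) * W^T relu(W x + b); details are assumptions.
import torch
import torch.nn as nn

class ConvexPotentialLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        spectral_norm = torch.linalg.matrix_norm(self.W, ord=2)
        h = torch.relu(x @ self.W.T + self.b)
        return x - (2.0 / spectral_norm ** 2) * (h @ self.W)

# Empirical check that the map is nonexpansive (1-Lipschitz).
layer = ConvexPotentialLayer(16)
x1, x2 = torch.randn(32, 16), torch.randn(32, 16)
ratio = (layer(x1) - layer(x2)).norm(dim=1) / (x1 - x2).norm(dim=1)
print(ratio.max())  # should not exceed 1, up to numerical error
```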
arXiv Detail & Related papers (2021-10-25T07:12:53Z) - DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z) - Adversarially Robust Neural Architectures [43.74185132684662]
This paper aims to improve the adversarial robustness of the network from the architecture perspective with NAS framework.
We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters.
Our algorithm empirically achieves the best performance among all the models under various attacks on different datasets.
arXiv Detail & Related papers (2020-09-02T08:52:15Z) - Neural Ensemble Search for Uncertainty Estimation and Dataset Shift [67.57720300323928]
Ensembles of neural networks achieve superior performance compared to stand-alone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift.
We propose two methods for automatically constructing ensembles with varying architectures.
We show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also uncertainty calibration and robustness to dataset shift.
arXiv Detail & Related papers (2020-06-15T17:38:15Z) - A Closer Look at Accuracy vs. Robustness [94.2226357646813]
Current methods for training robust networks lead to a drop in test accuracy.
We show that real image datasets are actually separated, i.e., images from different classes lie a positive distance apart.
We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness.
arXiv Detail & Related papers (2020-03-05T07:09:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.