Exploring Model Robustness with Adaptive Networks and Improved
Adversarial Training
- URL: http://arxiv.org/abs/2006.00387v1
- Date: Sat, 30 May 2020 23:23:56 GMT
- Title: Exploring Model Robustness with Adaptive Networks and Improved
Adversarial Training
- Authors: Zheng Xu, Ali Shafahi, Tom Goldstein
- Abstract summary: We propose a conditional normalization module to adapt networks when conditioned on input samples.
Our adaptive networks, once adversarially trained, can outperform their non-adaptive counterparts on both clean validation accuracy and robustness.
- Score: 56.82000424924979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training has proven to be effective in hardening networks against
adversarial examples. However, the gained robustness is limited by network
capacity and number of training samples. Consequently, to build more robust
models, it is common practice to train on widened networks with more
parameters. To boost robustness, we propose a conditional normalization module
to adapt networks when conditioned on input samples. Our adaptive networks,
once adversarially trained, can outperform their non-adaptive counterparts on
both clean validation accuracy and robustness. Our method is objective agnostic
and consistently improves both the conventional adversarial training objective
and the TRADES objective. Our adaptive networks also outperform larger widened
non-adaptive architectures that have 1.5 times more parameters. We further
introduce several practical "tricks" in adversarial training to improve
robustness and empirically verify their efficiency.
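The abstract does not spell out the conditional normalization module's exact design; as a rough illustrative sketch (all parameter names hypothetical, and far simpler than the paper's actual module), a normalization layer whose affine scale and shift are predicted from the input sample itself could look like:

```python
import numpy as np

def conditional_norm(x, w_gamma, b_gamma, w_beta, b_beta, eps=1e-5):
    """Normalize features, then apply a per-sample affine transform whose
    scale (gamma) and shift (beta) are predicted from the input itself.
    Parameter names are hypothetical; the paper's module is richer."""
    mu = x.mean(axis=0, keepdims=True)      # per-feature batch statistics
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)   # standard normalization
    gamma = x @ w_gamma + b_gamma           # scale conditioned on the sample
    beta = x @ w_beta + b_beta              # shift conditioned on the sample
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
d = x.shape[1]
# Identity-style initialization: gamma starts near 1 and beta near 0, so
# the layer initially behaves like a plain normalization layer.
w_gamma, b_gamma = np.zeros((d, d)), np.ones(d)
w_beta, b_beta = np.zeros((d, d)), np.zeros(d)
y = conditional_norm(x, w_gamma, b_gamma, w_beta, b_beta)
```

Because the affine parameters depend on each input, the network effectively adapts its feature statistics per sample, which is the property the abstract credits for the robustness gains.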
Related papers
- CAT: Collaborative Adversarial Training [80.55910008355505]
We propose a collaborative adversarial training framework to improve the robustness of neural networks.
Specifically, we use different adversarial training methods to train robust models and let models interact with their knowledge during the training process.
CAT achieves state-of-the-art adversarial robustness on CIFAR-10 under the AutoAttack benchmark without using any additional data.
arXiv Detail & Related papers (2023-03-27T05:37:43Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [94.92954973680914]
We introduce two alternatives for sparse adversarial training: (i) static sparsity and (ii) dynamic sparsity.
We find that both methods yield a win-win: they substantially shrink the robust generalization gap and alleviate robust overfitting.
Our approaches can be combined with existing regularizers, establishing new state-of-the-art results in adversarial training.
arXiv Detail & Related papers (2022-02-20T15:52:08Z)
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
- $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training [11.241749205970253]
We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times while incurring only a small reduction in clean and robust accuracy.
arXiv Detail & Related papers (2021-12-01T09:55:01Z)
- Fast Training of Deep Neural Networks Robust to Adversarial Perturbations [0.0]
We show that a fast approximation to adversarial training shows promise for reducing training time and maintaining robustness.
Fast adversarial training is a promising approach that will provide increased security and explainability in machine learning applications.
arXiv Detail & Related papers (2020-07-08T00:35:39Z)
- Improved Adversarial Training via Learned Optimizer [101.38877975769198]
We propose a framework to improve the robustness of adversarial training models.
By co-training a learned optimizer's parameters with the model's weights, the proposed framework consistently improves robustness and adapts the update directions at each step.
arXiv Detail & Related papers (2020-04-25T20:15:53Z)
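The papers above all build on the same core loop: an inner maximization that crafts adversarial examples, and an outer minimization that trains on them. As a minimal, hedged sketch (a generic PGD-based adversarial training loop for a logistic-regression model, not any specific paper's method), the idea can be written as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization: ascend the loss within an l_inf ball of
    radius eps around x (projected gradient descent attack)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = np.outer(p - y, w)                 # d(loss)/dx per sample
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

# Toy linearly separable data (illustrative only).
rng = np.random.default_rng(1)
n, d = 64, 5
x = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (x @ w_true > 0).astype(float)

# Outer minimization: gradient descent on worst-case examples.
w, b = np.zeros(d), 0.0
for _ in range(200):
    x_adv = pgd_linf(x, y, w, b)
    p = sigmoid(x_adv @ w + b)
    w -= 0.1 * x_adv.T @ (p - y) / n
    b -= 0.1 * (p - y).mean()

clean_acc = ((sigmoid(x @ w + b) > 0.5) == (y == 1)).mean()
```

The surveyed papers vary the pieces of this loop: the architecture it trains (adaptive, sparse, or binary networks), how the inner attack is computed or approximated (fast and learned-optimizer variants), and how the outer step is scaled out (distributed large-batch training).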
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.