Fast Propagation is Better: Accelerating Single-Step Adversarial
Training via Sampling Subnetworks
- URL: http://arxiv.org/abs/2310.15444v1
- Date: Tue, 24 Oct 2023 01:36:20 GMT
- Title: Fast Propagation is Better: Accelerating Single-Step Adversarial
Training via Sampling Subnetworks
- Authors: Xiaojun Jia, Jianshu Li, Jindong Gu, Yang Bai and Xiaochun Cao
- Abstract summary: A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
- Score: 69.54774045493227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training has shown promise in building robust models against
adversarial examples. A major drawback of adversarial training is the
computational overhead introduced by the generation of adversarial examples. To
overcome this limitation, adversarial training based on single-step attacks has
been explored. Previous work improves the single-step adversarial training from
different perspectives, e.g., sample initialization, loss regularization, and
training strategy. Almost all of them treat the underlying model as a black
box. In this work, we propose to exploit the interior building blocks of the
model to improve efficiency. Specifically, we propose to dynamically sample
lightweight subnetworks as a surrogate model during training. By doing this,
both the forward and backward passes can be accelerated for efficient
adversarial training. Besides, we provide theoretical analysis to show the
model robustness can be improved by the single-step adversarial training with
sampled subnetworks. Furthermore, we propose a novel sampling strategy where
the sampling varies from layer to layer and from iteration to iteration.
Compared with previous methods, our method not only reduces the training cost
but also achieves better model robustness. Evaluations on a series of popular
datasets demonstrate the effectiveness of the proposed FP-Better. Our code has
been released at https://github.com/jiaxiaojunQAQ/FP-Better.
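The core idea above can be sketched in a few lines: each iteration samples a lightweight subnetwork (here, a random mask over hidden units, with a keep ratio that changes per iteration), crafts a single-step (FGSM-style) adversarial example against that surrogate, and would then update the full model on it. This is a minimal illustrative sketch, not the paper's FP-Better implementation; the two-layer network, squared loss, and the specific keep-ratio schedule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(width, keep_ratio):
    """Binary mask keeping a random fraction of hidden units."""
    k = max(1, int(round(keep_ratio * width)))
    mask = np.zeros(width)
    mask[rng.choice(width, size=k, replace=False)] = 1.0
    return mask

def fgsm_on_subnet(x, y, W1, w2, mask, eps):
    """Single-step perturbation computed on the masked surrogate
    f(x) = w2 . (mask * relu(W1 x)) with squared loss (f(x) - y)^2."""
    h = np.maximum(W1 @ x, 0.0) * mask                   # masked hidden activations
    err = (w2 @ h) - y                                   # residual of the squared loss
    grad_x = 2.0 * err * (W1.T @ (w2 * mask * (h > 0)))  # dL/dx through the subnetwork
    return x + eps * np.sign(grad_x)                     # FGSM step, bounded by eps

# toy setup: 4-dim input, 8 hidden units
W1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=8)
x, y = rng.normal(size=4), 1.0
eps = 0.03

for it in range(3):  # the sampled subnetwork varies from iteration to iteration
    mask = sample_mask(8, keep_ratio=0.25 * (it + 1))
    x_adv = fgsm_on_subnet(x, y, W1, w2, mask, eps)
    # a full-model parameter update on (x_adv, y) would go here
```

Because only the masked units participate in the forward and backward passes of the attack, the example-generation cost scales with the subnetwork size rather than the full model, which is the efficiency argument the abstract makes.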
Related papers
- CAT: Collaborative Adversarial Training [80.55910008355505]
We propose a collaborative adversarial training framework to improve the robustness of neural networks.
Specifically, we use different adversarial training methods to train robust models and let models interact with their knowledge during the training process.
CAT achieves state-of-the-art adversarial robustness without using any additional data on CIFAR-10 under the Auto-Attack benchmark.
arXiv Detail & Related papers (2023-03-27T05:37:43Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been developed to robustify DNNs.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training [11.241749205970253]
We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times, while experiencing a small reduction in the clean and robust accuracy.
arXiv Detail & Related papers (2021-12-01T09:55:01Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- Deep Ensembles for Low-Data Transfer Learning [21.578470914935938]
We study different ways of creating ensembles from pre-trained models.
We show that the nature of pre-training itself is a performant source of diversity.
We propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset.
arXiv Detail & Related papers (2020-10-14T07:59:00Z)
- Efficient Robust Training via Backward Smoothing [125.91185167854262]
Adversarial training is the most effective strategy in defending against adversarial examples.
It suffers from high computational costs due to the iterative adversarial attacks in each training step.
Recent studies show that it is possible to achieve fast adversarial training by performing a single-step attack.
arXiv Detail & Related papers (2020-10-03T04:37:33Z)
- Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using the single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using the proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z)
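Two of the related papers above (Adversarial Coreset Selection and $\ell_\infty$-Robustness and Beyond) speed up robust training by training on a small, carefully chosen subset of the data. As a rough illustration of the subset-selection idea, the sketch below ranks examples of a toy linear model by per-example gradient norm and keeps the top fraction; the gradient-norm criterion, squared loss, and 20% budget are assumptions for this example, not those papers' actual adversarial coreset objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear model; for squared loss the per-example gradient is
# g_i = 2 * (w . x_i - y_i) * x_i, so ||g_i|| = 2 * |w . x_i - y_i| * ||x_i||
w = rng.normal(size=5)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

grad_norms = 2.0 * np.abs(X @ w - y) * np.linalg.norm(X, axis=1)

# keep only the 20% of examples with the largest gradient norm;
# subsequent (robust) training steps would run on this coreset only
k = 20
coreset_idx = np.argsort(grad_norms)[-k:]
X_core, y_core = X[coreset_idx], y[coreset_idx]
```

Training on 20% of the data directly cuts the number of adversarial-example generations per epoch, which is where the reported 2-3x speedups come from; the trade-off is the small drop in clean and robust accuracy that the summaries mention.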
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.