Adversarial Coreset Selection for Efficient Robust Training
- URL: http://arxiv.org/abs/2209.05785v2
- Date: Tue, 8 Aug 2023 06:06:35 GMT
- Title: Adversarial Coreset Selection for Efficient Robust Training
- Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
- Abstract summary: We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are vulnerable to adversarial attacks: adding well-crafted,
imperceptible perturbations to their input can modify their output. Adversarial
training is one of the most effective approaches to training robust models
against such attacks. Unfortunately, this method is much slower than vanilla
training of neural networks since it needs to construct adversarial examples
for the entire training data at every iteration. By leveraging the theory of
coreset selection, we show how selecting a small subset of training data
provides a principled approach to reducing the time complexity of robust
training. To this end, we first provide convergence guarantees for adversarial
coreset selection. In particular, we show that the convergence bound is
directly related to how well our coresets can approximate the gradient computed
over the entire training data. Motivated by our theoretical analysis, we
propose using this gradient approximation error as our adversarial coreset
selection objective to reduce the training set size effectively. Once built, we
run adversarial training over this subset of the training data. Unlike existing
methods, our approach can be adapted to a wide variety of training objectives,
including TRADES, $\ell_p$-PGD, and Perceptual Adversarial Training. We conduct
extensive experiments to demonstrate that our approach speeds up adversarial
training by 2-3 times while experiencing a slight degradation in the clean and
robust accuracy.
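As a concrete illustration, the following is a minimal, hypothetical sketch of this recipe in PyTorch: per-sample gradients on adversarial examples are approximated by a cheap last-layer proxy, a weighted coreset is chosen greedily to keep the gradient approximation error small, and adversarial training then runs on that subset only. The PGD attack, the gradient proxy, and the greedy selection rule are standard stand-ins of our choosing, not the paper's exact algorithm.

```python
# Hypothetical sketch of adversarial coreset selection; the attack, the
# last-layer gradient proxy, and the greedy rule are our simplifications.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard l_inf PGD, used both for coreset scoring and for training."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def grad_proxy(model, x_adv, y):
    """Cheap per-sample gradient surrogate: the loss gradient w.r.t. the
    logits (softmax minus one-hot), a common stand-in for full gradients."""
    with torch.no_grad():
        p = model(x_adv).softmax(1)
    return p - F.one_hot(y, p.size(1)).float()

def greedy_coreset(grads, k):
    """Greedily pick k points whose gradients best 'cover' everyone else's;
    each point's weight is the size of the cluster it represents."""
    n = grads.size(0)
    dists = torch.cdist(grads, grads)
    cover = torch.full((n,), 1e12)              # current approximation error
    chosen = []
    for _ in range(k):
        gain = (cover - torch.minimum(cover, dists)).sum(1)
        j = int(gain.argmax())                  # biggest error reduction
        chosen.append(j)
        cover = torch.minimum(cover, dists[j])
    assign = dists[chosen].argmin(0)            # nearest chosen point per sample
    weights = torch.bincount(assign, minlength=k).float()
    return chosen, weights

def coreset_adv_step(model, opt, x, y, k):
    idx, w = greedy_coreset(grad_proxy(model, pgd_attack(model, x, y), y), k)
    xs, ys = x[idx], y[idx]
    loss = (w / w.sum() * F.cross_entropy(
        model(pgd_attack(model, xs, ys)), ys, reduction='none')).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice one would rebuild the coreset only periodically rather than at every step; training on the small weighted subset in between is where a 2-3x saving of the kind reported above would come from.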
Related papers
- Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness (see the sketch below).
arXiv Detail & Related papers (2023-10-24T01:36:20Z)
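The subnetwork idea above might look like the following hypothetical sketch: the single-step adversarial example is crafted through a randomly thinned network (residual blocks dropped, stochastic-depth style), so its forward and backward passes are cheaper, while the weight update still uses the full model. The block structure and keep probability are our assumptions.

```python
# Hypothetical sketch of sampling a subnetwork for cheap single-step attack
# generation; block structure and keep probability are our assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampledNet(nn.Module):
    def __init__(self, stem, blocks, head):
        super().__init__()
        self.stem, self.head = stem, head
        self.blocks = nn.ModuleList(blocks)     # shape-preserving residual blocks
        self.keep_prob = 1.0                    # 1.0 means the full network

    def forward(self, x):
        x = self.stem(x)
        for blk in self.blocks:
            if self.keep_prob >= 1.0 or random.random() < self.keep_prob:
                x = x + blk(x)                  # block kept in the sample
            # else: block skipped entirely, so its forward/backward is free
        return self.head(x)

def fgsm_via_subnet(model, x, y, eps=8/255, keep_prob=0.5):
    model.keep_prob = keep_prob                 # attack a cheap subnetwork
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    model.keep_prob = 1.0                       # full network for the update
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```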
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches, and in some cases outperforms, state-of-the-art attacks (one hedged rendering is given below).
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
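One way to write such a non-zero-sum bilevel objective (our notation, not necessarily the paper's) is to let the attacker maximize a genuine misclassification measure $M$, e.g. a negative prediction margin, while the defender minimizes its own surrogate loss $\ell$ on the resulting perturbations:

```latex
% Hedged rendering of a non-zero-sum bilevel adversarial training objective;
% the notation (M as negative margin, \ell as cross-entropy) is ours.
\min_{\theta} \; \mathbb{E}_{(x,y)}
  \Big[ \ell\big(f_\theta(x + \delta^\star(x, y)),\, y\big) \Big]
\quad \text{s.t.} \quad
\delta^\star(x, y) \in \operatorname*{arg\,max}_{\|\delta\|_p \le \epsilon}
  M\big(f_\theta(x + \delta),\, y\big)
```

The zero-sum special case takes $M = \ell$; decoupling the attacker's and defender's objectives is what restores the guarantees the summary above says the surrogate relaxation voids.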
- CAT: Collaborative Adversarial Training [80.55910008355505]
We propose a collaborative adversarial training framework to improve the robustness of neural networks.
Specifically, we train robust models with different adversarial training methods and let them exchange knowledge with one another during training (sketched below).
CAT achieves state-of-the-art adversarial robustness on CIFAR-10 under the AutoAttack benchmark without using any additional data.
arXiv Detail & Related papers (2023-03-27T05:37:43Z)
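A hedged sketch of the collaborative idea: two robust models swap knowledge through a KL term on each other's (detached) adversarial predictions. The FGSM attack and the weighting `beta` are our assumptions, not the paper's exact recipe.

```python
# Hedged sketch of collaborative adversarial training: two models, each
# trained on its own adversarial examples, distill from one another.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().requires_grad_(True)
    g, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * g.sign()).clamp(0, 1).detach()

def collaborative_step(model_a, model_b, opt_a, opt_b, x, y, beta=1.0):
    pa = model_a(fgsm(model_a, x, y))           # each model attacks itself
    pb = model_b(fgsm(model_b, x, y))
    # cross-entropy on own adversarial examples + distillation from the peer
    loss_a = F.cross_entropy(pa, y) + beta * F.kl_div(
        pa.log_softmax(1), pb.softmax(1).detach(), reduction='batchmean')
    loss_b = F.cross_entropy(pb, y) + beta * F.kl_div(
        pb.log_softmax(1), pa.softmax(1).detach(), reduction='batchmean')
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
```

Detaching the peer's predictions keeps each model's update independent, so the two models teach each other without their gradients mixing.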
- Do we need entire training data for adversarial training? [2.995087247817663]
We show that the training time of any adversarial training algorithm can be reduced by running the adversarial phase on only a subset of the training data (see the sketch below).
We perform adversarial training on the adversarially-prone subset and mix it with vanilla training performed on the entire dataset.
Our results show that when our method-agnostic approach is plugged into FGSM, we achieve a speedup of 3.52x on MNIST and 1.98x on the CIFAR-10 dataset with comparable robust accuracy.
arXiv Detail & Related papers (2023-03-10T23:21:05Z)
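A hedged, batch-level sketch of the subset-mixing recipe above: rank samples by how adversarially prone they are (here, by their loss under a one-step attack; the exact criterion is our assumption), adversarially train on that fraction, and keep vanilla training on everything. The paper separates the two phases at the dataset level; this per-batch version is a simplification.

```python
# Hedged sketch: adversarial loss on the "adversarially prone" fraction of a
# batch, mixed with a vanilla loss on the full batch.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().requires_grad_(True)
    g, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * g.sign()).clamp(0, 1).detach()

def mixed_step(model, opt, x, y, frac=0.3):
    x_adv = fgsm(model, x, y)                   # probe with a one-step attack
    with torch.no_grad():                       # score by adversarial loss
        scores = F.cross_entropy(model(x_adv), y, reduction='none')
    prone = scores.topk(int(frac * len(x))).indices   # adversarially prone
    adv = F.cross_entropy(model(fgsm(model, x[prone], y[prone])), y[prone])
    clean = F.cross_entropy(model(x), y)        # vanilla loss on the full batch
    opt.zero_grad(); (adv + clean).backward(); opt.step()
```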
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach for improving model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines (sketched below).
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
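A hedged sketch of distributed adversarial training with PyTorch's DistributedDataParallel: each worker crafts PGD examples on its own data shard, and gradients are averaged across machines on the backward pass. The paper's large-batch optimizer tricks are omitted; the batch size, attack, and process topology here are our assumptions.

```python
# Hedged sketch: per-worker PGD example generation, DDP gradient averaging.
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        g, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def worker(rank, world_size, model, dataset, epochs=1):
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    ddp = DDP(model.to(rank), device_ids=[rank])
    sampler = DistributedSampler(dataset, world_size, rank)
    loader = DataLoader(dataset, batch_size=256, sampler=sampler)
    opt = torch.optim.SGD(ddp.parameters(), lr=0.1, momentum=0.9)
    for ep in range(epochs):
        sampler.set_epoch(ep)
        for x, y in loader:
            x, y = x.to(rank), y.to(rank)
            # attack through the unwrapped module so PGD's extra forward
            # passes do not trigger DDP's gradient-sync hooks
            x_adv = pgd(ddp.module, x, y)
            loss = F.cross_entropy(ddp(x_adv), y)
            opt.zero_grad(); loss.backward(); opt.step()  # grads all-reduced
```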
- $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training [11.241749205970253]
We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times, while experiencing a small reduction in the clean and robust accuracy.
arXiv Detail & Related papers (2021-12-01T09:55:01Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- CAT: Customized Adversarial Training for Improved Robustness [142.3480998034692]
We propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training.
Extensive experiments show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods (see the sketch below).
arXiv Detail & Related papers (2020-02-17T06:13:05Z)
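A hedged sketch of per-sample budget customization: grow a sample's perturbation budget while the model still resists the attack at that budget, and shrink it once the attack succeeds. The one-step attack, the step size, and the cap are our assumptions; the paper also adapts the labels, which is omitted here.

```python
# Hedged sketch: adapt each sample's l_inf budget based on whether a
# one-step attack at the current budget already fools the model.
import torch
import torch.nn.functional as F

def update_budgets(model, x, y, eps, step=1/255, eps_max=8/255):
    """x: images (N, C, H, W); eps: per-sample budgets, shape (N,)."""
    xg = x.clone().requires_grad_(True)
    g, = torch.autograd.grad(F.cross_entropy(model(xg), y), xg)
    x_adv = (x + eps.view(-1, 1, 1, 1) * g.sign()).clamp(0, 1)
    with torch.no_grad():
        fooled = model(x_adv).argmax(1) != y
    # harder budget for still-robust samples, easier for already-fooled ones
    return torch.where(fooled, eps - step, eps + step).clamp(0.0, eps_max)
```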
- Improving the affordability of robustness training for DNNs [11.971637253035107]
We show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency (sketched below).
Our proposed method can reduce the training time by a factor of up to 2.5, with comparable or better model test accuracy and generalization across various adversarial attack strengths.
arXiv Detail & Related papers (2020-02-11T07:29:45Z)
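A hedged sketch of the warmup recipe: spend the first portion of training on natural examples and switch to adversarial examples afterwards. The fixed switch point and the pluggable `attack` callable are our assumptions; the paper motivates the idea by the redundancy of the initial adversarial phase.

```python
# Hedged sketch: natural-training warmup followed by adversarial training.
import torch.nn.functional as F

def warmup_robust_training(model, opt, loader, attack, epochs=100,
                           warmup_frac=0.4):
    switch = int(warmup_frac * epochs)          # epoch at which AT kicks in
    for ep in range(epochs):
        for x, y in loader:
            inputs = x if ep < switch else attack(model, x, y)  # natural first
            loss = F.cross_entropy(model(inputs), y)
            opt.zero_grad(); loss.backward(); opt.step()
```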