$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial
Training
- URL: http://arxiv.org/abs/2112.00378v1
- Date: Wed, 1 Dec 2021 09:55:01 GMT
- Title: $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial
Training
- Authors: Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
- Abstract summary: We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times, while incurring only a small reduction in clean and robust accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are vulnerable to adversarial attacks: adding well-crafted,
imperceptible perturbations to their input can modify their output. Adversarial
training is one of the most effective approaches in training robust models
against such attacks. However, it is much slower than vanilla training of
neural networks since it needs to construct adversarial examples for the entire
training data at every iteration, which has hampered its effectiveness.
Recently, Fast Adversarial Training was proposed as a way to obtain robust models
efficiently. However, the reasons behind its success are not fully understood,
and more importantly, it can only train robust models for $\ell_\infty$-bounded
attacks as it uses FGSM during training. In this paper, by leveraging the
theory of coreset selection we show how selecting a small subset of training
data provides a more principled approach towards reducing the time complexity
of robust training. Unlike existing methods, our approach can be adapted to a
wide variety of training objectives, including TRADES, $\ell_p$-PGD, and
Perceptual Adversarial Training. Our experimental results indicate that our
approach speeds up adversarial training by 2-3 times, while incurring only a
small reduction in clean and robust accuracy.
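To make the approach concrete, here is a minimal PyTorch-style sketch of coreset-based adversarial training: the expensive PGD inner loop runs only on a selected subset, refreshed every few epochs. The loss-based selection score and all hyperparameters below are illustrative assumptions standing in for the paper's gradient-based greedy selection, not its exact procedure.
```python
# Hedged sketch of coreset-based adversarial training. The loss-based scoring
# is a cheap stand-in for the paper's greedy, gradient-based selection.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard l_inf PGD; the expensive step we only run on the coreset."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def select_coreset(model, dataset, fraction=0.5, device="cuda"):
    """Keep the `fraction` of examples with the highest clean loss
    (an assumed proxy for the paper's gradient-matching criterion)."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256):
            x, y = x.to(device), y.to(device)
            scores.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    scores = torch.cat(scores)
    idx = torch.topk(scores, int(fraction * len(dataset))).indices.tolist()
    return Subset(dataset, idx)

def coreset_adv_training(model, train_set, epochs=30, refresh_every=5,
                         device="cuda"):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for epoch in range(epochs):
        if epoch % refresh_every == 0:   # re-select as the model evolves
            loader = DataLoader(select_coreset(model, train_set),
                                batch_size=128, shuffle=True)
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)          # inner maximization
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
```
Because the PGD inner loop dominates the per-iteration cost, training on half the data roughly halves the per-epoch time, which is where speed-ups of the reported order come from.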
Related papers
- Fast Propagation is Better: Accelerating Single-Step Adversarial
Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
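The efficiency gain comes from backpropagating the single-step attack through only part of the network. Below is a hedged PyTorch sketch in which the attack gradient flows through a randomly sampled stack of residual blocks; the stem/blocks/head decomposition, the `keep_prob` parameter, and the skip rule are illustrative assumptions, not the paper's exact sampling scheme.
```python
# Hedged sketch: craft the single-step attack through a sampled subnetwork
# (randomly skipped residual blocks), then train the full model as usual.
import random
import torch
import torch.nn.functional as F

def sampled_forward(blocks, h, keep_prob=0.7):
    """Keep each residual block with probability keep_prob. Assumes the
    blocks are shape-preserving, so skipping one is well defined."""
    for block in blocks:
        if random.random() < keep_prob:
            h = block(h)
    return h

def subnet_fgsm(stem, blocks, head, x, y, eps=8/255):
    """Single-step attack whose gradient comes from the sampled subnetwork."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(head(sampled_forward(blocks, stem(x))), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```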
arXiv Detail & Related papers (2023-10-24T01:36:20Z)
- AccelAT: A Framework for Accelerating the Adversarial Training of Deep
Neural Networks through Accuracy Gradient [12.118084418840152]
Adversarial training is exploited to develop a robust Deep Neural Network (DNN) model against maliciously altered data.
This paper aims at accelerating the adversarial training to enable fast development of robust DNN models against adversarial attacks.
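The "accuracy gradient" here is the rate of change of accuracy across epochs, used to drive the learning-rate schedule: keep the rate high while accuracy climbs, decay it once progress stalls. In the hedged sketch below, the plateau threshold and decay factor are assumptions, not the framework's actual rules.
```python
# Hedged sketch of an accuracy-gradient-driven learning-rate schedule.
class AccuracyGradientScheduler:
    def __init__(self, optimizer, decay=0.5, min_slope=0.002):
        self.opt = optimizer
        self.decay = decay          # multiplicative LR cut on plateau
        self.min_slope = min_slope  # accuracy gain/epoch counted as progress
        self.prev_acc = None

    def step(self, acc):
        """Call once per epoch with the current validation accuracy."""
        if self.prev_acc is not None and acc - self.prev_acc < self.min_slope:
            for group in self.opt.param_groups:
                group["lr"] *= self.decay   # progress stalled: decay faster
        self.prev_acc = acc
```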
arXiv Detail & Related papers (2022-10-13T10:31:51Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at
Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
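A minimal sketch of the large-batch idea using PyTorch's DistributedDataParallel: each worker crafts adversarial examples for its own data shard, and gradients are all-reduced so the effective adversarial batch grows with the number of machines. The one-GPU-per-process layout and all hyperparameters are assumptions, not the paper's exact framework.
```python
# Hedged sketch of distributed adversarial training; launch via torchrun.
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """l_inf PGD; run on each worker's shard independently."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        g, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def distributed_adv_training(model, train_set, epochs=10):
    dist.init_process_group("nccl")               # one process per GPU
    rank = dist.get_rank()
    ddp_model = DDP(model.cuda(rank), device_ids=[rank])
    sampler = DistributedSampler(train_set)
    loader = DataLoader(train_set, batch_size=256, sampler=sampler)
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1, momentum=0.9)
    for epoch in range(epochs):
        sampler.set_epoch(epoch)                  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(rank), y.cuda(rank)
            # attack through the bare module to avoid DDP's gradient hooks
            x_adv = pgd(ddp_model.module, x, y)
            opt.zero_grad()
            F.cross_entropy(ddp_model(x_adv), y).backward()  # grads all-reduced
            opt.step()
```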
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
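The schedule itself is easy to state in code. Below is a hedged sketch of one plausible "slow start, fast decay" schedule built on PyTorch's LambdaLR; the warm-up length and decay rate are illustrative, not the paper's values.
```python
# Hedged sketch: short linear warm-up (slow start) then steep exponential
# decay (fast decay), applied during adversarial fine-tuning.
import torch

def slow_start_fast_decay(optimizer, warmup_epochs=5, decay=0.7):
    def factor(epoch):
        if epoch < warmup_epochs:                  # slow start: ramp LR up
            return (epoch + 1) / warmup_epochs
        return decay ** (epoch - warmup_epochs)    # fast decay afterwards
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=factor)

# usage: call scheduler.step() once per fine-tuning epoch
model = torch.nn.Linear(10, 2)                     # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = slow_start_fast_decay(opt)
```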
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- Towards Understanding Fast Adversarial Training [91.8060431517248]
We conduct experiments to understand the behavior of fast adversarial training.
We show the key to its success is the ability to recover from overfitting to weak attacks.
arXiv Detail & Related papers (2020-06-04T18:19:43Z)
- Fast is better than free: Revisiting adversarial training [86.11788847990783]
We show that it is possible to train empirically robust models using a much weaker and cheaper adversary.
We identify a failure mode referred to as "catastrophic overfitting" which may have caused previous attempts to use FGSM adversarial training to fail.
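The weaker, cheaper adversary is FGSM preceded by a random start inside the eps-ball; a minimal sketch follows (the step size of roughly 1.25 times eps follows the paper's recommendation; other details are simplified).
```python
# FGSM with random initialization: one gradient step, preceded by a uniform
# random start in the eps-ball, projected back onto it afterwards.
import torch
import torch.nn.functional as F

def fgsm_random_start(model, x, y, eps=8/255, alpha=10/255):
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # project to eps-ball
    return (x + delta).clamp(0, 1).detach()
```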
arXiv Detail & Related papers (2020-01-12T20:30:22Z)