Towards Understanding Fast Adversarial Training
- URL: http://arxiv.org/abs/2006.03089v1
- Date: Thu, 4 Jun 2020 18:19:43 GMT
- Title: Towards Understanding Fast Adversarial Training
- Authors: Bai Li, Shiqi Wang, Suman Jana, Lawrence Carin
- Abstract summary: We conduct experiments to understand the behavior of fast adversarial training.
We show that the key to its success is the ability to recover from overfitting to weak attacks.
- Score: 91.8060431517248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current neural-network-based classifiers are susceptible to adversarial
examples. The most empirically successful approach to defending against such
adversarial examples is adversarial training, which incorporates a strong
self-attack during training to enhance its robustness. This approach, however,
is computationally expensive and hence is hard to scale up. A recent work,
called fast adversarial training, has shown that it is possible to markedly
reduce computation time without sacrificing significant performance. This
approach incorporates simple self-attacks, yet it can only run for a limited
number of training epochs, resulting in sub-optimal performance. In this paper,
we conduct experiments to understand the behavior of fast adversarial training
and show the key to its success is the ability to recover from overfitting to
weak attacks. We then extend our findings to improve fast adversarial training,
demonstrating superior robust accuracy to strong adversarial training, with
much-reduced training time.
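For context, fast adversarial training replaces the multi-step attack of standard adversarial training with a single-step FGSM attack started from a random point inside the $\epsilon$-ball. Below is a minimal PyTorch sketch of one such training step, assuming inputs in [0, 1]; `eps` and `alpha` are illustrative values, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fast_at_step(model, x, y, optimizer, eps=8/255, alpha=10/255):
    """One FGSM-with-random-init training step (inputs assumed in [0, 1])."""
    # Start from a uniformly random perturbation inside the eps-ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    F.cross_entropy(model(x + delta), y).backward()
    # Single gradient-sign step, then project back onto the eps-ball.
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
    x_adv = (x + delta).clamp(0.0, 1.0)
    # Update the model on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```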
Related papers
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging [73.78965374696608]
We study over 10 fast adversarial training methods in terms of adversarial robustness and training costs.
We revisit the effectiveness and efficiency of fast adversarial training techniques in preventing Catastrophic Overfitting.
We propose an FGSM-based fast adversarial training method equipped with Lipschitz regularization and Auto Weight Averaging.
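A rough sketch of the two ingredients in the title, under assumed forms: a penalty on the local Lipschitz quotient of the network outputs, and an exponential-moving-average style of weight averaging. The paper's exact regularizer and averaging rule may differ; `lam` and `decay` are placeholders:

```python
import torch
import torch.nn.functional as F

def law_style_loss(model, x, x_adv, y, lam=0.5):
    # Adversarial cross-entropy plus a penalty on the local Lipschitz
    # quotient ||f(x_adv) - f(x)|| / ||x_adv - x|| (assumed form).
    out_clean, out_adv = model(x), model(x_adv)
    quot = (out_adv - out_clean).flatten(1).norm(dim=1) / \
           (x_adv - x).flatten(1).norm(dim=1).clamp_min(1e-12)
    return F.cross_entropy(out_adv, y) + lam * quot.mean()

@torch.no_grad()
def update_weight_average(avg_model, model, decay=0.999):
    # Exponential moving average of weights, one common form of
    # weight averaging for robust models.
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.mul_(decay).add_(p, alpha=1.0 - decay)
```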
arXiv Detail & Related papers (2023-08-22T13:50:49Z)
- Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing [5.1024659285813785]
Adversarial training has been the most successful defense against such adversarial attacks.
We propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training.
Our method achieves state-of-the-art robustness against $\ell_\infty$-norm constrained attacks.
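A speculative sketch of instance-wise loss smoothing: interpolate clean and adversarial losses per example, with a weight that adapts to each instance. The sigmoid-of-loss-gap rule below is an illustration, not the paper's criterion:

```python
import torch
import torch.nn.functional as F

def instance_smoothed_loss(model, x, x_adv, y):
    # Interpolate clean and adversarial losses per instance, leaning on the
    # adversarial loss for instances that are more easily attacked.
    # The sigmoid-of-gap weight is illustrative, not the paper's rule.
    loss_clean = F.cross_entropy(model(x), y, reduction="none")
    loss_adv = F.cross_entropy(model(x_adv), y, reduction="none")
    w = torch.sigmoid((loss_adv - loss_clean).detach())
    return (w * loss_adv + (1.0 - w) * loss_clean).mean()
```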
arXiv Detail & Related papers (2023-03-24T15:41:40Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
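A simplified sketch of the coreset idea: periodically score the training set and run the expensive attack-and-train loop only on a high-scoring subset. The clean-loss score below is a cheap stand-in for the paper's gradient-based selection criterion; `budget_frac` is a placeholder:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_coreset(model, dataset, budget_frac=0.3, batch_size=256):
    # Score every example by its clean loss -- a cheap stand-in for the
    # gradient-based selection criterion in the paper -- and keep the
    # highest-scoring fraction for the next adversarial-training epochs.
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
    scores = []
    for x, y in loader:
        scores.append(F.cross_entropy(model(x), y, reduction="none"))
    scores = torch.cat(scores)
    keep = scores.topk(int(budget_frac * len(scores))).indices
    return torch.utils.data.Subset(dataset, keep.tolist())
```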
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Collaborative Adversarial Training [82.25340762659991]
We show that some collaborative examples, nearly perceptually indistinguishable from both adversarial and benign examples, can be utilized to enhance adversarial training.
A novel method called collaborative adversarial training (CoAT) is thus proposed to achieve new state-of-the-art results.
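Collaborative examples can be read as the mirror image of adversarial ones: points inside the $\epsilon$-ball that minimize rather than maximize the loss. A hedged sketch of generating one (the full CoAT objective is more involved):

```python
import torch
import torch.nn.functional as F

def collaborative_example(model, x, y, eps=8/255, alpha=2/255, steps=5):
    # Descend (rather than ascend) the loss within the eps-ball, yielding
    # a near-benign "collaborative" neighbor of x; inputs assumed in [0, 1].
    x_col = x.clone()
    for _ in range(steps):
        x_col.requires_grad_(True)
        loss = F.cross_entropy(model(x_col), y)
        grad = torch.autograd.grad(loss, x_col)[0]
        x_col = x_col.detach() - alpha * grad.sign()   # descent step
        x_col = x + (x_col - x).clamp(-eps, eps)       # project to eps-ball
        x_col = x_col.clamp(0.0, 1.0)
    return x_col.detach()
```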
arXiv Detail & Related papers (2022-05-23T09:41:41Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept, the adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) can boost intra-class feature similarity and increase inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
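Since ATG itself is not summarized here, the sketch below only illustrates the stated goal: a separability regularizer that rewards intra-class similarity and penalizes inter-class similarity on the feature layer. The paper's actual loss is defined via its adversarial training graph and differs:

```python
import torch

def feature_separability_loss(features, labels):
    # Reward intra-class similarity and penalize inter-class similarity on
    # normalized penultimate-layer features.
    # Assumes the batch contains at least one same-class pair.
    f = torch.nn.functional.normalize(features, dim=1)
    sim = f @ f.t()                                    # pairwise cosine similarity
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = sim[same & ~eye].mean()
    inter = sim[~same].mean()
    return inter - intra                               # minimize: tight classes, spread apart
```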
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Enhancing Adversarial Robustness for Deep Metric Learning [77.75152218980605]
The adversarial robustness of deep metric learning models needs to be improved.
To avoid model collapse due to excessively hard examples, existing defenses forgo min-max adversarial training.
We propose Hardness Manipulation to efficiently perturb the training triplet until it reaches a specified level of hardness for adversarial training.
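A plausible reading of Hardness Manipulation as gradient ascent on a triplet-hardness measure H = d(a, p) - d(a, n), perturbing the negative until H reaches a target; the paper's exact formulation may differ, and `target_hardness`, `eps`, and `alpha` are placeholders:

```python
import torch

def harden_triplet(embed, anchor, pos, neg, target_hardness,
                   eps=8/255, alpha=2/255, max_steps=10):
    # Perturb the negative image until the triplet hardness
    # H = d(a, p) - d(a, n) reaches `target_hardness`.
    with torch.no_grad():
        a, p = embed(anchor), embed(pos)
    d_ap = (a - p).norm(dim=1)
    neg_adv = neg.clone()
    for _ in range(max_steps):
        neg_adv.requires_grad_(True)
        hardness = d_ap - (a - embed(neg_adv)).norm(dim=1)
        if (hardness >= target_hardness).all():
            break
        grad = torch.autograd.grad(hardness.sum(), neg_adv)[0]
        neg_adv = neg_adv.detach() + alpha * grad.sign()   # ascend hardness
        neg_adv = neg + (neg_adv - neg).clamp(-eps, eps)   # stay in eps-ball
        neg_adv = neg_adv.clamp(0.0, 1.0)                  # inputs in [0, 1]
    return neg_adv.detach()
```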
arXiv Detail & Related papers (2022-03-02T22:27:44Z)
- $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training [11.241749205970253]
We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times, while experiencing a small reduction in the clean and robust accuracy.
arXiv Detail & Related papers (2021-12-01T09:55:01Z)
- Gradient-Guided Dynamic Efficient Adversarial Training [6.980357450216633]
Adversarial training is arguably an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks.
We propose Dynamic Efficient Adversarial Training (DEAT), which gradually increases the number of adversarial iterations during training.
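The core mechanism is a growing attack budget. A minimal sketch with a linear schedule; DEAT itself adapts the growth using gradient information, which is omitted here:

```python
def pgd_steps(epoch, total_epochs, min_steps=1, max_steps=10):
    # Grow the PGD iteration count linearly over training: cheap 1-step
    # attacks early, full-strength attacks late.
    frac = epoch / max(total_epochs - 1, 1)
    return min_steps + round(frac * (max_steps - min_steps))
```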
arXiv Detail & Related papers (2021-03-04T14:57:53Z)
- Improving the affordability of robustness training for DNNs [11.971637253035107]
We show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency.
We show that our proposed method can reduce training time by a factor of up to 2.5, with comparable or better model test accuracy and generalization against various strengths of adversarial attacks.
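A minimal sketch of the resulting schedule: cheap natural training for the early epochs, then a switch to standard adversarial training. `attack` and `switch_epoch` are placeholders; the paper's switching criterion may be more refined:

```python
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, attack, epoch, switch_epoch):
    # Early epochs use cheap natural training; adversarial example
    # generation only starts at `switch_epoch`. `attack(model, x, y)`
    # stands in for any PGD-style attack.
    for x, y in loader:
        if epoch >= switch_epoch:
            x = attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x), y).backward()
        optimizer.step()
```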
arXiv Detail & Related papers (2020-02-11T07:29:45Z)
- Efficient Adversarial Training with Transferable Adversarial Examples [58.62766224452761]
We show that there is high transferability between models from neighboring epochs in the same training process.
We propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA) that can enhance the robustness of trained models.
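One way to exploit that transferability, sketched under assumptions: cache each example's perturbation across epochs and use it to warm-start the next epoch's attack, so only a few refinement steps are needed instead of a full PGD run. Class and method names below are hypothetical, not from the paper:

```python
import torch

class PerturbationStore:
    """Carry per-example perturbations across epochs (simplified ATTA-style).

    Because models from neighboring epochs transfer well, last epoch's
    perturbation is a strong starting point for this epoch's attack.
    """
    def __init__(self, num_examples, example_shape, eps):
        self.delta = torch.zeros(num_examples, *example_shape)
        self.eps = eps

    def warm_start(self, idx, x):
        # Inputs assumed in [0, 1].
        return (x + self.delta[idx]).clamp(0.0, 1.0)

    def update(self, idx, x, x_adv):
        # Keep only the eps-bounded part of the refined perturbation.
        self.delta[idx] = (x_adv - x).clamp(-self.eps, self.eps)
```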
arXiv Detail & Related papers (2019-12-27T03:05:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.