Gradient-Guided Dynamic Efficient Adversarial Training
- URL: http://arxiv.org/abs/2103.03076v1
- Date: Thu, 4 Mar 2021 14:57:53 GMT
- Title: Gradient-Guided Dynamic Efficient Adversarial Training
- Authors: Fu Wang, Yanghao Zhang, Yanbin Zheng, Wenjie Ruan
- Abstract summary: Adversarial training is arguably an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks.
We propose Dynamic Efficient Adversarial Training (DEAT), which gradually increases the number of adversarial iterations during training.
- Score: 6.980357450216633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training is arguably an effective but time-consuming way to train
robust deep neural networks that can withstand strong adversarial attacks. As a
response to this inefficiency, we propose Dynamic Efficient Adversarial
Training (DEAT), which gradually increases the number of adversarial
iterations during training. Moreover, we theoretically reveal the connection
between the lower bound of the Lipschitz constant of a given network and the
magnitude of its partial derivative with respect to adversarial examples.
Supported by this theoretical finding, we utilize the gradient's magnitude to
quantify the effectiveness of adversarial training and determine when to
adjust the training procedure. This magnitude-based strategy is
computationally friendly and easy to implement.
It is especially suited for DEAT and can also be transplanted into a wide range
of adversarial training methods. Our post-investigation suggests that
maintaining the quality of the training adversarial examples at a certain level
is essential to achieve efficient adversarial training, which may shed some
light on future studies.
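The abstract describes a loop that starts with few attack iterations and adds more when the gradient magnitude on the training adversarial examples signals that they have become too weak. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the toy logistic model, the PGD attack, the 0.8 drop threshold, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary-classification data (stand-in for a real dataset).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

w = np.zeros(10)  # logistic-regression weights


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def input_grad(w, X, y):
    """Gradient of the logistic loss with respect to the inputs X."""
    p = sigmoid(X @ w)
    return np.outer(p - y, w)


def pgd(w, X, y, eps=0.3, alpha=0.1, k=1):
    """k-step PGD attack, projected onto the l_inf ball of radius eps."""
    X_adv = X.copy()
    for _ in range(k):
        g = input_grad(w, X_adv, y)
        X_adv = X_adv + alpha * np.sign(g)      # ascent step on the loss
        X_adv = np.clip(X_adv, X - eps, X + eps)  # projection
    return X_adv


# DEAT-style loop: begin with k=1 and add an iteration when the mean
# gradient magnitude on the adversarial batch drops below a fraction of
# its value the last time k changed (the 0.8 trigger is an assumption,
# standing in for the paper's magnitude-based criterion).
k, lr, ref_mag = 1, 0.5, None
history = []
for epoch in range(60):
    X_adv = pgd(w, X, y, k=k)
    mag = np.mean(np.abs(input_grad(w, X_adv, y)))
    if ref_mag is None:
        ref_mag = mag
    elif mag < 0.8 * ref_mag:  # training adversarial examples got too weak
        k += 1
        ref_mag = mag
    # standard gradient-descent update on the adversarial batch
    p = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p - y) / len(y)
    history.append(k)
```

Because the attack strength only ever ratchets upward, early epochs stay cheap (one attack step each) while later epochs pay for stronger attacks only once the magnitude signal says they are needed.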
Related papers
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
arXiv Detail & Related papers (2022-09-13T07:37:53Z) - Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z) - Collaborative Adversarial Training [82.25340762659991]
We show that some collaborative examples, nearly perceptually indistinguishable from both adversarial and benign examples, can be utilized to enhance adversarial training.
A novel method called collaborative adversarial training (CoAT) is thus proposed to achieve a new state of the art.
arXiv Detail & Related papers (2022-05-23T09:41:41Z) - Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept, the adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z) - Enhancing Adversarial Robustness for Deep Metric Learning [77.75152218980605]
The adversarial robustness of deep metric learning models needs to be improved.
To avoid model collapse due to excessively hard examples, existing defenses dismiss min-max adversarial training.
We propose Hardness Manipulation to efficiently perturb the training triplet to a specified level of hardness for adversarial training.
arXiv Detail & Related papers (2022-03-02T22:27:44Z) - $\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training [11.241749205970253]
We show how selecting a small subset of training data provides a more principled approach towards reducing the time complexity of robust training.
Our approach speeds up adversarial training by 2-3 times, while experiencing a small reduction in the clean and robust accuracy.
arXiv Detail & Related papers (2021-12-01T09:55:01Z) - Towards Understanding Fast Adversarial Training [91.8060431517248]
We conduct experiments to understand the behavior of fast adversarial training.
We show the key to its success is the ability to recover from overfitting to weak attacks.
arXiv Detail & Related papers (2020-06-04T18:19:43Z) - Improving the affordability of robustness training for DNNs [11.971637253035107]
We show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency.
We show that our proposed method can reduce the training time by a factor of up to 2.5 with comparable or better model test accuracy and generalization on various strengths of adversarial attacks.
arXiv Detail & Related papers (2020-02-11T07:29:45Z) - Efficient Adversarial Training with Transferable Adversarial Examples [58.62766224452761]
We show that there is high transferability between models from neighboring epochs in the same training process.
We propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA) that can enhance the robustness of trained models.
arXiv Detail & Related papers (2019-12-27T03:05:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.