Adaptive perturbation adversarial training: based on reinforcement
learning
- URL: http://arxiv.org/abs/2108.13239v1
- Date: Mon, 30 Aug 2021 13:49:55 GMT
- Title: Adaptive perturbation adversarial training: based on reinforcement
learning
- Authors: Zhishen Nie, Ying Lin, Sp Ren, Lan Zhang
- Abstract summary: One shortcoming of adversarial training is that it reduces the recognition accuracy of normal samples.
Adaptive perturbation adversarial training is proposed to alleviate this problem.
It trains on marginal adversarial samples that lie close to the decision boundary but do not cross it.
- Score: 9.563820241076103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training has become the primary method to defend against
adversarial samples. However, it is difficult to apply in practice due to several
shortcomings, one of which is that it reduces the recognition accuracy of normal
samples. Adaptive perturbation adversarial training is proposed to alleviate this
problem. It trains on marginal adversarial samples that lie close to the decision
boundary but do not cross it, which improves the recognition accuracy of the
model while maintaining its robustness. However, searching for marginal
adversarial samples incurs additional computational cost. This paper proposes a
method for finding marginal adversarial samples based on reinforcement learning,
and combines it with the latest fast adversarial training techniques, which
effectively speeds up the training process and reduces training costs.
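The paper's search itself uses reinforcement learning; as a simpler illustration of what a "marginal adversarial sample" is, the following sketch binary-searches the step size along an FGSM-style direction for a toy linear classifier, so that the perturbed sample approaches the decision boundary without crossing it. All names here are illustrative, not from the paper.

```python
# Hypothetical sketch (not the paper's RL method): find a marginal
# adversarial sample for a linear classifier by scaling an FGSM-style
# perturbation so the input moves toward, but not across, the boundary.

def predict(w, b, x):
    """Linear classifier: returns label +1 or -1."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def fgsm_direction(w, y):
    """Sign of the gradient of -y * (w.x) w.r.t. x: moving along this
    direction pushes the score toward the decision boundary."""
    return [-y * (1 if wi >= 0 else -1) for wi in w]

def marginal_sample(w, b, x, y, eps_max=1.0, steps=20):
    """Binary-search the largest step size along the FGSM direction
    that keeps the prediction equal to the true label y."""
    d = fgsm_direction(w, y)
    lo, hi = 0.0, eps_max
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_adv = [xi + mid * di for xi, di in zip(x, d)]
        if predict(w, b, x_adv) == y:
            lo = mid   # still on the correct side: push further
        else:
            hi = mid   # crossed the boundary: back off
    return [xi + lo * di for xi, di in zip(x, d)]
```

The returned sample is still classified correctly but sits near the boundary, which is the kind of training point the abstract describes; the reinforcement-learning contribution of the paper is to make this per-sample search cheap.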
Related papers
- Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial
Training [20.1991376813843]
We propose a generalized adversarial training algorithm called Hider-Focused Adversarial Training (HFAT)
HFAT combines the optimization directions of standard adversarial training and of hider prevention.
We demonstrate the effectiveness of our method based on extensive experiments.
arXiv Detail & Related papers (2023-12-12T08:41:18Z)
- Fast Propagation is Better: Accelerating Single-Step Adversarial
Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
arXiv Detail & Related papers (2023-10-24T01:36:20Z)
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW:
Lipschitz Regularization and Auto Weight Averaging [73.78965374696608]
We study over 10 fast adversarial training methods in terms of adversarial robustness and training costs.
We revisit the effectiveness and efficiency of fast adversarial training techniques in preventing Catastrophic Overfitting.
We propose a FGSM-based fast adversarial training method equipped with Lipschitz regularization and Auto Weight averaging.
arXiv Detail & Related papers (2023-08-22T13:50:49Z)
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the commonly used surrogate-based relaxation used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- Improved Adversarial Training Through Adaptive Instance-wise Loss
Smoothing [5.1024659285813785]
Adversarial training has been the most successful defense against such adversarial attacks.
We propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training.
Our method achieves state-of-the-art robustness against $\ell_\infty$-norm constrained attacks.
arXiv Detail & Related papers (2023-03-24T15:41:40Z)
- Boundary Adversarial Examples Against Adversarial Overfitting [4.391102490444538]
Adversarial training approaches suffer from robust overfitting, where robust accuracy decreases when models are adversarially trained for too long.
Several approaches, including early stopping, temporal ensembling and weight memorization, have been proposed to mitigate robust overfitting.
In this paper, we investigate whether these mitigation approaches are complementary to each other in improving adversarial training performance.
arXiv Detail & Related papers (2022-11-25T13:16:53Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Towards Understanding Fast Adversarial Training [91.8060431517248]
We conduct experiments to understand the behavior of fast adversarial training.
We show the key to its success is the ability to recover from overfitting to weak attacks.
arXiv Detail & Related papers (2020-06-04T18:19:43Z)
- Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using the proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z)
- Improving the affordability of robustness training for DNNs [11.971637253035107]
We show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency.
We show that our proposed method can reduce training time by a factor of up to 2.5 with comparable or better model test accuracy and generalization across various strengths of adversarial attacks.
arXiv Detail & Related papers (2020-02-11T07:29:45Z)
- Efficient Adversarial Training with Transferable Adversarial Examples [58.62766224452761]
We show that there is high transferability between models from neighboring epochs in the same training process.
We propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA) that can enhance the robustness of trained models.
arXiv Detail & Related papers (2019-12-27T03:05:05Z)
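ATTA's key observation above, high transferability between neighboring epochs, can be sketched in a toy form: each sample keeps an accumulated perturbation that is warm-started from the previous epoch and refined with one FGSM-style step. This is a hypothetical pure-Python logistic-regression illustration, not the authors' implementation; all names are assumptions.

```python
# Hedged sketch of the ATTA idea: reuse each sample's perturbation
# across epochs instead of recomputing it from scratch.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_x(w, x, y):
    """Gradient of the logistic loss w.r.t. the input x (labels 0/1)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def atta_epoch(w, data, deltas, eps=0.3, step=0.1, lr=0.5):
    """One epoch: refine each stored perturbation with a single
    sign-gradient step, then update w on the perturbed samples."""
    for i, (x, y) in enumerate(data):
        # warm-start from last epoch's delta, take one FGSM-style step
        x_adv = [xi + di for xi, di in zip(x, deltas[i])]
        g = grad_x(w, x_adv, y)
        deltas[i] = [clip(di + step * (1 if gi >= 0 else -1), -eps, eps)
                     for di, gi in zip(deltas[i], g)]
        # train on the (updated) adversarial example
        x_adv = [xi + di for xi, di in zip(x, deltas[i])]
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
        w[:] = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
    return w
```

Because the perturbation buffer carries information between epochs, each epoch needs only one gradient step per sample, which is the source of ATTA's efficiency gain.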
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.