Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial
Training
- URL: http://arxiv.org/abs/2312.07067v1
- Date: Tue, 12 Dec 2023 08:41:18 GMT
- Title: Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial
Training
- Authors: Qian Li, Yuxiao Hu, Yinpeng Dong, Dongxiao Zhang, Yuntian Chen
- Abstract summary: We propose a generalized adversarial training algorithm called Hider-Focused Adversarial Training (HFAT).
HFAT combines the optimization directions of standard adversarial training and of hider prevention.
Extensive experiments demonstrate the effectiveness of the method.
- Score: 20.1991376813843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training is often formulated as a min-max problem; however,
concentrating only on the worst adversarial examples causes the model to cycle
through repeated confusion, i.e., previously defended or correctly classified
samples become indefensible or misclassified in subsequent adversarial
training. We characterize such non-ignorable samples as "hiders",
which reveal the hidden high-risk regions within the secure area obtained
through adversarial training and prevent the model from finding the real worst
cases. We require the model to prevent hiders while defending against adversarial
examples, improving accuracy and robustness simultaneously. By rethinking
and redefining the min-max optimization problem for adversarial training, we
propose a generalized adversarial training algorithm called Hider-Focused
Adversarial Training (HFAT). HFAT introduces the iterative evolution
optimization strategy to simplify the optimization problem and employs an
auxiliary model to reveal hiders, effectively combining the optimization
directions of standard adversarial training and hider prevention. Furthermore,
we introduce an adaptive weighting mechanism that facilitates the model in
adaptively adjusting its focus between adversarial examples and hiders during
different training periods. Extensive experiments demonstrate the effectiveness
of our method and show that HFAT provides higher robustness and accuracy.
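For intuition, here is a hedged PyTorch-style sketch of what a hider-focused update could look like. It is not the authors' implementation: the hider-revealing rule (attacking through an auxiliary model), the weight `alpha_t`, and the function names are illustrative assumptions based only on the abstract.

```python
# Illustrative sketch of an HFAT-style update, NOT the authors' code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, n_steps=10):
    """Standard PGD under an l_inf budget (the usual inner maximization)."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()

def hfat_style_step(model, aux_model, x, y, alpha_t, optimizer):
    # Direction 1: standard adversarial training on worst-case examples.
    loss_at = F.cross_entropy(model(pgd_attack(model, x, y)), y)

    # Direction 2 (assumption): approximate "hiders" as points inside the
    # perturbation budget that an auxiliary model still gets wrong; the
    # paper's actual hider-revealing procedure may differ.
    loss_hider = F.cross_entropy(model(pgd_attack(aux_model, x, y)), y)

    # Adaptive weighting between the two optimization directions; alpha_t
    # is assumed to be scheduled over training periods.
    loss = (1 - alpha_t) * loss_at + alpha_t * loss_hider
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```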
Related papers
- Dynamic Label Adversarial Training for Deep Learning Robustness Against Adversarial Attacks [11.389689242531327]
Adversarial training is one of the most effective methods for enhancing model robustness.
Previous approaches primarily use static ground truth for adversarial training, but this often causes robust overfitting.
We propose a dynamic label adversarial training (DYNAT) algorithm that enables the target model to gain robustness from the guide model's decisions.
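A minimal sketch of the dynamic-label idea described above: the target model learns from the guide model's current (soft) decisions rather than static ground truth. The names `guide_model`, the temperature, and the soft-label rule are assumptions, not DYNAT's exact procedure.

```python
# Hedged sketch of dynamic-label adversarial training, not the DYNAT code.
import torch
import torch.nn.functional as F

def dynat_style_loss(target_model, guide_model, x_adv, temperature=1.0):
    with torch.no_grad():
        # Dynamic label: the guide model's evolving prediction, not static ground truth.
        soft_y = F.softmax(guide_model(x_adv) / temperature, dim=1)
    logits = target_model(x_adv)
    # Cross-entropy against the guide's soft distribution (KL up to a constant).
    return -(soft_y * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```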
arXiv Detail & Related papers (2024-08-23T14:25:12Z) - Improved Adversarial Training Through Adaptive Instance-wise Loss
Smoothing [5.1024659285813785]
Adversarial training has been the most successful defense against adversarial attacks.
We propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training.
Our method achieves state-of-the-art robustness against $\ell_\infty$-norm constrained attacks.
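The summary gives no details of the smoothing, so the following is only a plausible sketch: a TRADES-style smoothness term with a per-instance weight; the confidence-based weighting rule is an assumption made for illustration.

```python
# Hedged sketch of instance-adaptive loss smoothing; the weighting rule is assumed.
import torch
import torch.nn.functional as F

def instance_adaptive_smooth_loss(model, x, x_adv, y, beta=6.0):
    logits_clean = model(x)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_adv, y)
    # Smoothness term: divergence between clean and adversarial predictions.
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction='none').sum(dim=1)
    with torch.no_grad():
        # Instance-wise weight (assumption): smooth more around samples the
        # model already classifies confidently.
        conf = F.softmax(logits_clean, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    return ce + beta * (conf * kl).mean()
```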
arXiv Detail & Related papers (2023-03-24T15:41:40Z) - Vanilla Feature Distillation for Improving the Accuracy-Robustness
Trade-Off in Adversarial Training [37.5115141623558]
We propose Vanilla Feature Distillation Adversarial Training (VFD-Adv), which guides adversarial training towards higher accuracy.
A key advantage of our method is that it can be universally adapted to and boost existing works.
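A hedged sketch of the feature-distillation idea: align the robust model's features on adversarial inputs with those of a frozen model trained on clean data (the "vanilla" model). The `return_features` API and the MSE alignment are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of vanilla feature distillation during adversarial training.
import torch
import torch.nn.functional as F

def vfd_style_loss(robust_model, vanilla_model, x, x_adv, y, lam=1.0):
    # Assumption: models expose penultimate features via return_features=True.
    logits_adv, feat_adv = robust_model(x_adv, return_features=True)
    with torch.no_grad():
        _, feat_clean = vanilla_model(x, return_features=True)  # frozen clean teacher
    ce = F.cross_entropy(logits_adv, y)
    distill = F.mse_loss(feat_adv, feat_clean)  # pull adversarial features to clean ones
    return ce + lam * distill
```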
arXiv Detail & Related papers (2022-06-05T11:57:10Z) - Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept of adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
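A simplified stand-in for a feature-separability objective (the paper's ATG-based formulation is more involved): reward cosine similarity within classes and penalize it across classes.

```python
# Illustrative separability term; not the ATFS formulation from the paper.
import torch
import torch.nn.functional as F

def separability_loss(features, labels):
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                                    # pairwise cosine similarity
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra_mask, inter_mask = same & off_diag, ~same
    # Guard against batches with no same-class (or no cross-class) pairs.
    intra = sim[intra_mask].mean() if intra_mask.any() else sim.new_zeros(())
    inter = sim[inter_mask].mean() if inter_mask.any() else sim.new_zeros(())
    # Minimizing raises within-class similarity and lowers cross-class similarity.
    return inter - intra
```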
arXiv Detail & Related papers (2022-05-02T04:04:23Z) - On the Convergence and Robustness of Adversarial Training [134.25999006326916]
Adversarial training with Projected Gradient Descent (PGD) is among the most effective.
We propose a dynamic training strategy to increase the convergence quality of the generated adversarial examples.
Our theoretical and empirical results show the effectiveness of the proposed method.
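For reference, a standard PGD inner loop, plus a crude dynamic schedule in the spirit of the strategy above; the paper bases its strategy on the convergence quality of the inner maximization, and the simple epoch-based schedule here is only an assumption.

```python
# Standard PGD; the dynamic schedule below is an illustrative stand-in.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, step=2/255, n_steps=10):
    """Projected Gradient Descent: ascend the loss, project back into the l_inf ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps)   # projection step
    return (x + delta).clamp(0, 1).detach()

def attack_steps_for_epoch(epoch, total_epochs, min_steps=2, max_steps=10):
    """Assumed schedule: weaker (cheaper) attacks early, better-converged ones late."""
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(min_steps + frac * (max_steps - min_steps)))
```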
arXiv Detail & Related papers (2021-12-15T17:54:08Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
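An illustrative sketch of a learned attack optimizer: a small coordinate-wise LSTM proposes update directions in place of PGD's `grad.sign()`. The architecture, sizes, and inference-only loop are assumptions; MAMA's meta-training of the optimizer is omitted.

```python
# Sketch of a learned attack optimizer in the spirit of MAMA; details assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    """Tiny coordinate-wise RNN that maps gradients to update directions."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)               # treat every coordinate independently
        h, c = self.cell(g, state)
        return torch.tanh(self.out(h)).reshape(grad.shape), (h, c)

def learned_attack(model, opt_rnn, x, y, eps=8/255, alpha=2/255, n_steps=10):
    delta, state = torch.zeros_like(x), None
    for _ in range(n_steps):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():                 # inference-only; meta-training omitted
            step, state = opt_rnn(grad, state)  # learned step replaces grad.sign()
        delta = (delta.detach() + alpha * step).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()
```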
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that the adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
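A sketch of the joint-training idea: craft adversarial examples against the composed pipeline (pre-processor followed by classifier) so they track the evolving pre-processor rather than staying static. The names `preproc` and the update rule are illustrative assumptions, not the JATP procedure.

```python
# Illustrative joint adversarial training step for a pre-processing defense.
import torch
import torch.nn.functional as F

def jatp_style_step(preproc, classifier, x, y, optimizer,
                    eps=8/255, alpha=2/255, n_steps=10):
    # Inner maximization over the composed model, not the classifier alone.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        delta = delta.detach().requires_grad_(True)
        logits = classifier(preproc((x + delta).clamp(0, 1)))
        grad = torch.autograd.grad(F.cross_entropy(logits, y), delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    # Outer minimization updates the pre-processor (classifier may be frozen).
    logits = classifier(preproc((x + delta.detach()).clamp(0, 1)))
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```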
arXiv Detail & Related papers (2021-06-10T01:45:32Z) - Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
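A hedged sketch of the distributional view: instead of searching for a single worst-case point, parameterize a distribution over perturbations (here a squashed diagonal Gaussian, an assumption) and maximize expected loss plus an entropy bonus.

```python
# Illustrative adversarial distributional training loss; parameterization assumed.
import torch
import torch.nn.functional as F

def adt_style_loss(model, x, y, eps=8/255, inner_steps=7, lam=0.01):
    # Perturbation distribution: diagonal Gaussian squashed into the l_inf ball.
    mu = torch.zeros_like(x, requires_grad=True)
    log_sigma = torch.full_like(x, -3.0).requires_grad_(True)
    inner_opt = torch.optim.Adam([mu, log_sigma], lr=0.1)
    for _ in range(inner_steps):
        z = torch.randn_like(x)
        delta = eps * torch.tanh(mu + z * log_sigma.exp())   # reparameterized sample
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        entropy = log_sigma.sum()        # Gaussian entropy up to additive constants
        # Ascend on expected loss + entropy; grads taken only w.r.t. mu, log_sigma.
        mu.grad, log_sigma.grad = torch.autograd.grad(-(loss + lam * entropy),
                                                      [mu, log_sigma])
        inner_opt.step()
    # Outer objective: train the model on a fresh sample from the learned distribution.
    with torch.no_grad():
        delta = eps * torch.tanh(mu + torch.randn_like(x) * log_sigma.exp())
    return F.cross_entropy(model((x + delta).clamp(0, 1)), y)
```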
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.