Revisiting and Advancing Adversarial Training Through A Simple Baseline
- URL: http://arxiv.org/abs/2306.07613v2
- Date: Sun, 19 Nov 2023 14:29:46 GMT
- Title: Revisiting and Advancing Adversarial Training Through A Simple Baseline
- Authors: Hong Liu
- Abstract summary: We introduce a simple baseline approach, termed SimpleAT, that performs competitively with recent methods and mitigates robust overfitting.
We conduct extensive experiments on CIFAR-10/100 and Tiny-ImageNet, which validate the robustness of SimpleAT against state-of-the-art adversarial attacks.
Our results also reveal the connections between SimpleAT and many advanced state-of-the-art adversarial defense methods.
- Score: 7.226961695849204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we delve into the essential components of adversarial
training, a pioneering defense technique against adversarial attacks. We
show that factors such as the loss function, learning rate scheduler,
and data augmentation, which are independent of the model architecture, will
influence adversarial robustness and generalization. When these factors are
controlled for, we introduce a simple baseline approach, termed SimpleAT, that
performs competitively with recent methods and mitigates robust overfitting. We
conduct extensive experiments on CIFAR-10/100 and Tiny-ImageNet, which validate
the robustness of SimpleAT against state-of-the-art adversarial attacks such
as AutoAttack. Our results also demonstrate that SimpleAT exhibits good
performance in the presence of various image corruptions, such as those found
in CIFAR-10-C. In addition, we empirically show that SimpleAT is capable of
reducing the variance in model predictions, which is considered the primary
contributor to robust overfitting. Our results also reveal the connections
between SimpleAT and many advanced state-of-the-art adversarial defense
methods.
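For context, the sketch below shows the standard PGD-based adversarial training loop (Madry et al.) that baselines of this kind build on; it is a generic illustration, not SimpleAT's exact recipe, whose distinguishing choices of loss function, learning rate schedule, and data augmentation the abstract only names.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples (standard formulation)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project back into eps-ball
        x_adv = x_adv.clamp(0, 1)                          # keep a valid image
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device):
    """One epoch of vanilla PGD adversarial training (min-max formulation)."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        loss.backward()
        optimizer.step()
```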
Related papers
- Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about the reliability of neural ranking models (NRMs) and hinders their widespread deployment.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z)
- Reducing Adversarial Training Cost with Gradient Approximation [0.3916094706589679]
We propose a new and efficient adversarial training method, adversarial training with gradient approximation (GAAT), to reduce the cost of building robust models.
Our method saves up to 60% of the training time while achieving comparable model test accuracy.
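The abstract does not say how GAAT approximates the gradients, so the sketch below illustrates the general cost-saving idea with a different, well-known technique, "free" adversarial training (Shafahi et al., 2019), in which a single backward pass is reused for both the parameter update and the perturbation update; it is not the GAAT algorithm itself.

```python
import torch
import torch.nn.functional as F

def free_adversarial_training_epoch(model, loader, optimizer, device,
                                    eps=8/255, replays=4):
    """'Free' adversarial training: each backward pass is used twice, once to
    update the model parameters and once to update the perturbation, so the
    inner attack costs no extra gradient computations. NOT the GAAT method."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        delta = torch.zeros_like(x)          # perturbation persists across replays
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            optimizer.zero_grad()
            loss.backward()                  # one backward pass, two uses:
            optimizer.step()                 # 1) update the model parameters
            with torch.no_grad():            # 2) update the perturbation
                delta = (delta + eps * delta.grad.sign()).clamp(-eps, eps)
```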
arXiv Detail & Related papers (2023-09-18T03:55:41Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks.
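A minimal sketch of the inference side of such a defense, assuming the defender models have already been trained; the paper's particular way of constructing diverse boundaries is not reproduced here.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax outputs of the original model and its defender
    models; an adversarial example must now cross several deliberately
    different decision boundaries at once to flip the ensemble's vote."""
    probs = torch.stack([m(x).softmax(dim=1) for m in models])  # [M, B, C]
    return probs.mean(dim=0).argmax(dim=1)                      # [B]
```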
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce the new concept of an adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
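The ATFS loss is defined on the ATG and is not spelled out in the abstract; the toy regularizer below only illustrates the stated objective, high intra-class similarity and low inter-class similarity, and assumes each class appears at least twice per batch.

```python
import torch
import torch.nn.functional as F

def separability_loss(features, labels):
    """Toy feature-separability regularizer, a sketch only: pull same-class
    features together, push different-class features apart. The actual ATFS
    loss on the adversarial training graph is not reproduced here."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                                   # pairwise cosine similarity
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = sim[same & ~eye].mean()                   # want this high
    inter = sim[~same].mean()                         # want this low
    return inter - intra
```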
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that the adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
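A hedged sketch of the underlying idea: craft the attack against the composed pipeline (pre-processor followed by classifier) rather than against a frozen classifier alone, then update the pre-processor on the resulting examples; the details of JATP's actual objective are omitted.

```python
import torch
import torch.nn.functional as F

def joint_preprocessing_step(preproc, classifier, x, y, optimizer,
                             eps=8/255, alpha=2/255, steps=10):
    """One joint-training step, a sketch only: the adversarial examples are
    crafted against the full preproc -> classifier pipeline, so they track
    the pre-processor as it changes. Optimizer is assumed to hold only the
    pre-processor's parameters; JATP's exact objective is not reproduced."""
    # inner maximization against the current pipeline
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(preproc(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    # outer minimization: update the pre-processing model
    optimizer.zero_grad()
    F.cross_entropy(classifier(preproc(x_adv)), y).backward()
    optimizer.step()
```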
arXiv Detail & Related papers (2021-06-10T01:45:32Z)
- Analysis and Applications of Class-wise Robustness in Adversarial Training [92.08430396614273]
Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples.
Previous works mainly focus on the overall robustness of the model, while an in-depth analysis of the role each class plays in adversarial training is still missing.
We provide a detailed diagnosis of adversarial training on six benchmark datasets, i.e., MNIST, CIFAR-10, CIFAR-100, SVHN, STL-10 and ImageNet.
We observe that stronger attack methods in adversarial learning achieve their performance improvements mainly by attacking the vulnerable classes more successfully.
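The per-class breakdown such a diagnosis relies on is straightforward to compute; a minimal sketch, assuming the adversarial examples were generated beforehand by an attack of choice:

```python
import torch

@torch.no_grad()
def per_class_robust_accuracy(model, x_adv, y, num_classes):
    """Robust accuracy broken down by class, computed on pre-generated
    adversarial examples; exposes which classes are most vulnerable."""
    pred = model(x_adv).argmax(dim=1)
    correct = pred.eq(y)
    accs = []
    for c in range(num_classes):
        mask = y.eq(c)
        accs.append(correct[mask].float().mean().item()
                    if mask.any() else float("nan"))
    return accs
```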
arXiv Detail & Related papers (2021-05-29T07:28:35Z)
- Bag of Tricks for Adversarial Training [50.53525358778331]
Adversarial training (AT) is one of the most effective strategies for promoting model robustness.
Recent benchmarks show that most of the proposed improvements on AT are less effective than simply early stopping the training procedure.
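Early stopping here amounts to checkpoint selection on robust validation accuracy; a minimal sketch, where train_epoch and eval_robust_acc are hypothetical callables supplied by the surrounding training setup:

```python
import copy

def train_with_early_stopping(model, train_epoch, eval_robust_acc, epochs):
    """Keep the weights from the epoch where robust accuracy on a held-out
    split peaks; the benchmark above found this simple selection beats most
    proposed AT improvements. train_epoch/eval_robust_acc are placeholders."""
    best_acc = 0.0
    best_state = copy.deepcopy(model.state_dict())
    for epoch in range(epochs):
        train_epoch(model)               # one epoch of adversarial training
        acc = eval_robust_acc(model)     # e.g., PGD accuracy on validation set
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)    # roll back to the best checkpoint
    return best_acc
```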
arXiv Detail & Related papers (2020-10-01T15:03:51Z)
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
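The core mechanism is to constrain features and classifier weights to the unit hypersphere, turning logits into scaled cosine similarities; the minimal head below sketches just that mechanism (the paper pairs normalization with further components such as an angular margin).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereClassifier(nn.Module):
    """Minimal hypersphere-embedding head, a sketch only: both features and
    class weights are L2-normalized, so logits become scaled cosine
    similarities rather than unbounded dot products."""
    def __init__(self, feat_dim, num_classes, scale=15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features):
        f = F.normalize(features, dim=1)     # project features to unit sphere
        w = F.normalize(self.weight, dim=1)  # and the class weights too
        return self.scale * f @ w.t()        # cosine logits
```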
arXiv Detail & Related papers (2020-02-20T08:42:29Z)