Fixing Data Augmentation to Improve Adversarial Robustness
- URL: http://arxiv.org/abs/2103.01946v1
- Date: Tue, 2 Mar 2021 18:58:33 GMT
- Title: Fixing Data Augmentation to Improve Adversarial Robustness
- Authors: Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg,
Olivia Wiles, Timothy Mann
- Abstract summary: Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training.
In this paper, we focus on both heuristics-driven and data-driven augmentations as a means to reduce robust overfitting.
We show large absolute improvements of +7.06% and +5.88% in robust accuracy compared to previous state-of-the-art methods.
- Score: 21.485435979018256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training suffers from robust overfitting, a phenomenon where the
robust test accuracy starts to decrease during training. In this paper, we
focus on both heuristics-driven and data-driven augmentations as a means to
reduce robust overfitting. First, we demonstrate that, contrary to previous
findings, when combined with model weight averaging, data augmentation can
significantly boost robust accuracy. Second, we explore how state-of-the-art
generative models can be leveraged to artificially increase the size of the
training set and further improve adversarial robustness. Finally, we evaluate
our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$ norm-bounded
perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$,
respectively. We show large absolute improvements of +7.06% and +5.88% in
robust accuracy compared to previous state-of-the-art methods. In particular,
against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$,
our model reaches 64.20% robust accuracy without using any external data,
beating most prior works that use external data.
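To make the abstract's two ingredients concrete, below is a minimal PyTorch sketch pairing an $\ell_\infty$ PGD attack of size $\epsilon = 8/255$ with an exponential moving average of model weights (model weight averaging). It is not the authors' implementation: the step size, iteration count, decay, and plain cross-entropy loss are illustrative assumptions, and data augmentation is assumed to happen upstream of `train_step`.

```python
import copy
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step=2/255, iters=10):
    """PGD inside the l_inf ball of radius eps (pixels assumed in [0, 1])."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()         # ascend the loss
            delta.clamp_(-eps, eps)                   # project onto the ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels valid
        delta.grad.zero_()
    return (x + delta).detach()

class WeightAverage:
    """Model weight averaging: maintain a slow exponential moving average."""
    def __init__(self, model, decay=0.995):          # decay is a placeholder
        self.average = copy.deepcopy(model).eval()
        self.decay = decay
    @torch.no_grad()
    def update(self, model):
        for p_avg, p in zip(self.average.parameters(), model.parameters()):
            p_avg.mul_(self.decay).add_(p, alpha=1 - self.decay)

def train_step(model, averaged, optimizer, x, y):
    # x is assumed to be already augmented (e.g. CutMix, as the paper studies)
    x_adv = pgd_linf(model, x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
    averaged.update(model)  # evaluate averaged.average at test time
```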
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., their margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness and that, unlike existing data pruning strategies, it significantly improves model performance.
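A minimal sketch of the margin idea, assuming a one-step, DeepFool-style linear approximation of the distance to the decision boundary; `keep_indices` and its 0.9 ratio are hypothetical placeholders, and the paper's actual selection rule is not reproduced here.

```python
import torch

def linear_margin(model, x, y):
    """One-step DeepFool-style margin: (f_y - f_k) / ||grad(f_y - f_k)||.
    The model is assumed to be in eval mode."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    f_y = logits.gather(1, y[:, None]).squeeze(1)    # true-class score
    rivals = logits.clone()
    rivals.scatter_(1, y[:, None], float("-inf"))
    f_k = rivals.max(dim=1).values                   # closest rival class score
    grad, = torch.autograd.grad((f_y - f_k).sum(), x)
    grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    return (f_y - f_k).detach() / grad_norm

def keep_indices(margins, keep_ratio=0.9):
    """Illustrative only: keep the smallest-margin fraction of the data."""
    k = int(keep_ratio * margins.numel())
    return margins.argsort()[:k]
```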
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
- Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting [11.029214459961114]
Balancing the trade-off between accuracy and robustness is a long-standing challenge in time series forecasting.
We study a wide array of perturbation scenarios and propose novel defense mechanisms against adversarial attacks using real-world telecom data.
arXiv Detail & Related papers (2023-11-16T11:10:38Z)
- Better Diffusion Models Further Improve Adversarial Training [97.44991845907708]
It has been recognized that data generated by denoising diffusion probabilistic models (DDPM) improves adversarial training.
This paper asks whether better diffusion models can improve adversarial training further, and answers affirmatively by employing a more recent, more efficient diffusion model.
Our adversarially trained models achieve state-of-the-art performance on RobustBench using only generated data.
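A sketch of the batch-composition idea shared by this paper and the one above: each training batch mixes original samples with generator output. The `mixed_batches` helper and the 0.7 fraction are illustrative assumptions, not values reported by either paper.

```python
import itertools
import torch

def mixed_batches(real_loader, generated_loader, generated_fraction=0.7):
    """Yield fixed-size batches whose rows come from both data sources."""
    generated = itertools.cycle(generated_loader)
    for x_real, y_real in real_loader:
        x_gen, y_gen = next(generated)
        n_gen = int(generated_fraction * len(x_real))
        x = torch.cat([x_real[n_gen:], x_gen[:n_gen]])  # keep batch size fixed
        y = torch.cat([y_real[n_gen:], y_gen[:n_gen]])
        yield x, y
```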
arXiv Detail & Related papers (2023-02-09T13:46:42Z)
- Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations [5.18694590238069]
Adversarial training (AT) and its variants have spearheaded progress in improving neural network robustness to adversarial perturbations.
We focus on models trained across a spectrum of $\epsilon$ values.
We identify alternative improvements to AT that would not have been apparent at a single $\epsilon$.
arXiv Detail & Related papers (2022-06-13T22:01:21Z)
- Data Augmentation Can Improve Robustness [21.485435979018256]
Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training.
We demonstrate that, when combined with model weight averaging, data augmentation can significantly boost robust accuracy.
In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data.
arXiv Detail & Related papers (2021-11-09T18:57:00Z)
- Improving Robustness using Generated Data [20.873767830152605]
Generative models trained solely on the original training set can be leveraged to artificially increase its size.
We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-10-18T17:00:26Z)
- Guided Interpolation for Adversarial Training [73.91493448651306]
As training progresses, the training data becomes less and less attackable, undermining the robustness enhancement.
We propose the guided interpolation framework (GIF), which employs the previous epoch's meta information to guide the generation of the data's adversarial variants.
Compared with the vanilla mixup, the GIF can provide a higher ratio of attackable data, which is beneficial to the robustness enhancement.
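A hypothetical sketch of interpolation guided by per-sample meta information, assuming the previous epoch recorded which samples were still attackable; the pairing rule and the `guided_mixup` helper are illustrative, not the paper's exact procedure.

```python
import torch

def guided_mixup(x, y_onehot, attackable, alpha=1.0):
    """Mix each sample with a partner drawn from the attackable subset,
    so interpolated data stays closer to attackable regions."""
    idx = attackable.nonzero(as_tuple=True)[0]
    if idx.numel() == 0:                       # fall back to plain mixup
        partner = torch.randperm(len(x))
    else:
        partner = idx[torch.randint(idx.numel(), (len(x),))]
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x + (1 - lam) * x[partner]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[partner]
    return x_mix, y_mix
```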
arXiv Detail & Related papers (2021-02-15T03:55:08Z)
- Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples [47.27255244183513]
We study the effect of different training losses, model sizes, activation functions, the addition of unlabeled data (through pseudo-labeling) and other factors on adversarial robustness.
We discover that it is possible to train robust models that go well beyond state-of-the-art results by combining larger models, Swish/SiLU activations and model weight averaging.
arXiv Detail & Related papers (2020-10-07T18:19:09Z)
- Geometry-aware Instance-reweighted Adversarial Training [78.70024866515756]
In adversarial machine learning, there was a common belief that robustness and accuracy hurt each other.
We propose geometry-aware instance-reweighted adversarial training, where the weights are based on how difficult it is to attack a natural data point.
Experiments show that our proposal boosts the robustness of standard adversarial training.
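A hedged sketch of the reweighting idea: estimate how hard each sample is to attack by the first PGD iteration that flips its prediction, then weight the per-sample loss accordingly. The linear schedule in `instance_weights` is a placeholder, not the paper's weighting function.

```python
import torch
import torch.nn.functional as F

def steps_to_flip(model, x, y, eps=8/255, step=2/255, max_iters=10):
    """First PGD iteration at which each sample's prediction flips;
    max_iters means the sample never flipped (far from the boundary)."""
    delta = torch.zeros_like(x, requires_grad=True)
    flipped = torch.full((len(x),), float(max_iters), device=x.device)
    for t in range(max_iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            wrong = model(x + delta).argmax(dim=1) != y
            first = wrong & (flipped == max_iters)   # not yet flipped
            flipped[first] = float(t)
        delta.grad.zero_()
    return flipped

def instance_weights(flipped, max_iters=10):
    """Larger weight for samples that flip earlier (nearer the boundary)."""
    w = (max_iters - flipped) / max_iters + 0.1      # placeholder schedule
    return w / w.sum()

# usage sketch:
#   w = instance_weights(steps_to_flip(model, x, y))
#   loss = (w * F.cross_entropy(model(x_adv), y, reduction="none")).sum()
```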
arXiv Detail & Related papers (2020-10-05T01:33:11Z)
- Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning [134.15174177472807]
We introduce adversarial training into self-supervision to provide general-purpose robust pre-trained models for the first time.
We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins.
arXiv Detail & Related papers (2020-03-28T18:28:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.