Data Augmentation Can Improve Robustness
- URL: http://arxiv.org/abs/2111.05328v1
- Date: Tue, 9 Nov 2021 18:57:00 GMT
- Title: Data Augmentation Can Improve Robustness
- Authors: Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg,
Olivia Wiles, Timothy Mann
- Abstract summary: Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training.
We demonstrate that, when combined with model weight averaging, data augmentation can significantly boost robust accuracy.
In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data.
- Score: 21.485435979018256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training suffers from robust overfitting, a phenomenon where the
robust test accuracy starts to decrease during training. In this paper, we
focus on reducing robust overfitting by using common data augmentation schemes.
We demonstrate that, contrary to previous findings, when combined with model
weight averaging, data augmentation can significantly boost robust accuracy.
Furthermore, we compare various augmentation techniques and observe that
spatial composition techniques work the best for adversarial training. Finally,
we evaluate our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$
norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$,
respectively. We show large absolute improvements of +2.93% and +2.16% in
robust accuracy compared to previous state-of-the-art methods. In particular,
against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$,
our model reaches 60.07% robust accuracy without using any external data. We
also achieve a significant performance boost with this approach while using
other architectures and datasets such as CIFAR-100, SVHN and TinyImageNet.
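The ingredient the abstract pairs with augmentation is model weight averaging: keep an exponential moving average (EMA) of the online network's parameters and evaluate the averaged copy. Below is a minimal PyTorch sketch of that idea; the decay value and the surrounding training-loop names (`augment`, `attack`, `criterion`) are illustrative assumptions, not the paper's exact recipe.

```python
import copy
import torch

class WeightAverager:
    """Exponential moving average (EMA) of a model's parameters.

    The online model trains as usual; the EMA copy is updated after every
    optimizer step and is the model evaluated at test time.
    """

    def __init__(self, model: torch.nn.Module, decay: float = 0.995):
        self.decay = decay
        self.ema_model = copy.deepcopy(model).eval()
        for p in self.ema_model.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        for ema_p, p in zip(self.ema_model.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
        # Copy non-trainable buffers (e.g. batch-norm statistics) directly.
        for ema_b, b in zip(self.ema_model.buffers(), model.buffers()):
            ema_b.copy_(b)

# Schematic use inside an adversarial training loop:
#   averager = WeightAverager(model)
#   for x, y in loader:
#       x_aug = augment(x)          # e.g. a spatial composition such as CutMix
#       x_adv = attack(model, x_aug, y)
#       criterion(model(x_adv), y).backward()
#       optimizer.step(); optimizer.zero_grad()
#       averager.update(model)      # evaluate averager.ema_model at test time
```

Per the abstract, augmentation alone was previously thought not to help adversarial training; it is the combination with this kind of averaging that yields the reported gains.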
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., their margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness and, unlike existing data pruning strategies, significantly improves model performance.
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
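PUMA's pruning signal is the margin, i.e. the distance from a training point to the decision boundary, which the authors estimate with DeepFool. The sketch below is a simplified stand-in: it uses a single linearization step rather than full iterative DeepFool, and dropping the lowest-margin fraction is an illustrative assumption, not necessarily the method's actual selection rule.

```python
import torch

def linearized_margin(model, x, y):
    """One-step linearized estimate of the distance to the decision boundary.

    Full DeepFool iterates this linearization; a single step already gives a
    usable ranking signal: |f_y - f_k| / ||grad(f_y - f_k)|| for the closest
    competing class k.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    top2 = logits.topk(2, dim=1).indices
    # Closest competitor: the runner-up class (or the top class if y lost).
    k = torch.where(top2[:, 0] == y, top2[:, 1], top2[:, 0])
    diff = logits.gather(1, y[:, None]) - logits.gather(1, k[:, None])
    grad = torch.autograd.grad(diff.sum(), x)[0]
    return diff.abs().squeeze(1) / grad.flatten(1).norm(dim=1).clamp_min(1e-12)

def prune_by_margin(margins, fraction=0.1):
    """Indices to keep after dropping the `fraction` lowest-margin samples."""
    n_drop = int(fraction * margins.numel())
    return margins.argsort()[n_drop:]
```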
- Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting [11.029214459961114]
Balancing the trade-off between accuracy and robustness is a long-standing challenge in time series forecasting.
We study a wide array of perturbation scenarios and propose novel defense mechanisms against adversarial attacks using real-world telecom data.
arXiv Detail & Related papers (2023-11-16T11:10:38Z)
- Better Diffusion Models Further Improve Adversarial Training [97.44991845907708]
It has been recognized that data generated by denoising diffusion probabilistic models (DDPM) improves adversarial training.
This paper gives an affirmative answer to whether better diffusion models can further improve it, by employing a more recent diffusion model with higher efficiency.
Our adversarially trained models achieve state-of-the-art performance on RobustBench using only generated data.
arXiv Detail & Related papers (2023-02-09T13:46:42Z)
- Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations [5.18694590238069]
Adversarial training (AT) and its variants have spearheaded progress in improving neural network robustness to adversarial perturbations.
We focus on models trained over a spectrum of $\epsilon$ values.
We identify alternative improvements to AT that would otherwise not have been apparent at a single $\epsilon$.
arXiv Detail & Related papers (2022-06-13T22:01:21Z)
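The inner maximization in adversarial training (AT) is typically $\ell_\infty$ PGD, and studying AT at a spectrum of perturbations amounts to rerunning the same recipe at several budgets $\epsilon$. A minimal PGD sketch, with the step-size heuristic and loop structure as assumptions:

```python
import torch

def pgd_linf(model, x, y, eps, step_size=None, steps=10):
    """l_inf PGD attack: ascend the loss, project back onto the eps-ball."""
    step_size = step_size if step_size is not None else 2.5 * eps / steps
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += step_size * grad.sign()          # gradient ascent step
            delta.clamp_(-eps, eps)                   # project onto eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep image in [0, 1]
    return (x + delta).detach()

# Sweeping a spectrum of budgets with the same architecture, e.g.:
#   for eps in (2/255, 4/255, 8/255, 16/255):
#       train the model with x_adv = pgd_linf(model, x, y, eps=eps)
```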
- SmoothNets: Optimizing CNN architecture design for differentially private deep learning [69.10072367807095]
DP-SGD requires clipping and noising of per-sample gradients.
This introduces a reduction in model utility compared to non-private training.
We distilled a new model architecture, termed SmoothNet, which is characterised by increased robustness to the challenges of DP-SGD training.
arXiv Detail & Related papers (2022-05-09T07:51:54Z)
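The clipping and noising the summary refers to operate on per-sample gradients: each example's gradient is clipped to a norm bound C before Gaussian noise calibrated to C is added. A naive, loop-based sketch follows (real implementations such as Opacus vectorize the per-sample step); the clip norm and noise multiplier are placeholder values.

```python
import torch

def dpsgd_step(model, loss_fn, x, y, optimizer, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each per-sample gradient, then add Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for i in range(x.shape[0]):
        # Per-sample gradient, computed one example at a time for clarity.
        loss = loss_fn(model(x[i : i + 1]), y[i : i + 1])
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)  # clip to C
        for s, g in zip(summed, grads):
            s += g * scale
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm  # Gaussian noise
        p.grad = (s + noise) / x.shape[0]
    optimizer.step()
```

The clipping bounds each sample's influence on the update, which is exactly the utility cost relative to non-private training that the summary mentions.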
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [94.92954973680914]
We introduce two alternatives for sparse adversarial training: (i) static sparsity and (ii) dynamic sparsity.
We find that both methods yield a win-win: they substantially shrink the robust generalization gap and alleviate robust overfitting.
Our approaches can be combined with existing regularizers, establishing new state-of-the-art results in adversarial training.
arXiv Detail & Related papers (2022-02-20T15:52:08Z)
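Of the two variants, static sparsity is the simpler: fix one magnitude-based pruning mask and keep it for the whole adversarial training run, re-applying it after each update. A hypothetical sketch, where the global-threshold criterion and the 90% sparsity level are assumptions:

```python
import torch

def magnitude_mask(model, sparsity=0.9):
    """Static sparsity: one global magnitude-pruning mask, fixed for training."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_w = torch.cat([p.detach().abs().flatten() for p in weights])
    threshold = all_w.kthvalue(int(sparsity * all_w.numel())).values
    return [(p.detach().abs() > threshold).float() for p in weights]

@torch.no_grad()
def apply_masks(model, masks):
    """Re-apply the masks after each optimizer step so pruned weights stay 0.

    Dynamic sparsity would instead periodically re-prune and regrow
    connections; here the mask is frozen, which is the static variant.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    for p, m in zip(weights, masks):
        p.mul_(m)
```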
- Improving Robustness using Generated Data [20.873767830152605]
Generative models trained solely on the original training set can be leveraged to artificially increase the size of the original training set.
We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-10-18T17:00:26Z)
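Mechanically, this means each training batch mixes original examples with samples drawn from a generative model fit only on the original training set. A hypothetical sketch; `generator`, `labeler`, and the 70% generated fraction are assumed names and values, not the paper's:

```python
import torch

def mixed_batch(real_x, real_y, generator, labeler, gen_fraction=0.7):
    """Build a training batch that mixes original and generated examples.

    `generator` samples images from a model fit only on the original training
    set; `labeler` assigns (pseudo-)labels to them. Both are assumed helpers.
    """
    n_gen = int(gen_fraction * real_x.shape[0])
    gen_x = generator(n_gen)                 # (n_gen, C, H, W) synthetic images
    gen_y = labeler(gen_x)                   # pseudo-labels for synthetic data
    x = torch.cat([real_x[: real_x.shape[0] - n_gen], gen_x])
    y = torch.cat([real_y[: real_y.shape[0] - n_gen], gen_y])
    perm = torch.randperm(x.shape[0])        # shuffle real and generated data
    return x[perm], y[perm]
```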
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
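The idea is to perturb in the latent space of an invertible model (a normalizing flow) so that the decoded change is semantic rather than pixel-level, and to make the perturbation adversarial with respect to the classifier being trained. The sketch below substitutes a toy single-coupling-layer "flow" on flat, even-dimensional vectors for a real flow; all names and the step size are illustrative.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """A single affine coupling layer: a toy invertible 'flow' for this sketch.

    Assumes inputs are flat vectors of even dimension `dim`.
    """

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):                      # data -> latent
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * log_s.exp() + t], dim=1)

    def inverse(self, z):                      # latent -> data
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, (z2 - t) * (-log_s).exp()], dim=1)

def latent_augment(flow, classifier, x, y, step=0.1):
    """Perturb in latent space, adversarially w.r.t. the current classifier.

    `classifier` is assumed to accept flat vectors here.
    """
    z = flow.forward(x.flatten(1)).detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(classifier(flow.inverse(z)), y)
    grad = torch.autograd.grad(loss, z)[0]
    # One ascent step in latent space; decode back to a semantic perturbation.
    return flow.inverse(z + step * grad.sign()).detach()
```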
- Fixing Data Augmentation to Improve Adversarial Robustness [21.485435979018256]
Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training.
In this paper, we focus on both heuristics-driven and data-driven augmentations as a means to reduce robust overfitting.
We show large absolute improvements of +7.06% and +5.88% in robust accuracy compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-02T18:58:33Z)
- Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples [47.27255244183513]
We study the effect of different training losses, model sizes, activation functions, the addition of unlabeled data (through pseudo-labeling) and other factors on adversarial robustness.
We discover that it is possible to train robust models that go well beyond state-of-the-art results by combining larger models, Swish/SiLU activations and model weight averaging.
arXiv Detail & Related papers (2020-10-07T18:19:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.