Efficient and Effective Augmentation Strategy for Adversarial Training
- URL: http://arxiv.org/abs/2210.15318v1
- Date: Thu, 27 Oct 2022 10:59:55 GMT
- Title: Efficient and Effective Augmentation Strategy for Adversarial Training
- Authors: Sravanti Addepalli, Samyak Jain, R. Venkatesh Babu
- Abstract summary: Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training.
We propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training.
- Score: 48.735220353660324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training of Deep Neural Networks is known to be significantly
more data-hungry when compared to standard training. Furthermore, complex data
augmentations such as AutoAugment, which have led to substantial gains in
standard training of image classifiers, have not been successful with
Adversarial Training. We first explain this contrasting behavior by viewing
augmentation during training as a problem of domain generalization, and further
propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use
data augmentations effectively in adversarial training. We aim to handle the
conflicting goals of enhancing the diversity of the training dataset and
training with data that is close to the test distribution by using a
combination of simple and complex augmentations with separate batch
normalization layers during training. We further utilize the popular
Jensen-Shannon divergence loss to encourage the joint learning of the diverse
augmentations, thereby allowing simple augmentations to guide the learning of
complex ones. Lastly, to improve the computational efficiency of the proposed
method, we propose and utilize a two-step defense, Ascending Constraint
Adversarial Training (ACAT), that uses an increasing epsilon schedule and
weight-space smoothing to prevent gradient masking. The proposed method DAJAT
achieves a substantially better robustness-accuracy trade-off when compared to
existing methods on the RobustBench Leaderboard on ResNet-18 and
WideResNet-34-10. The code for implementing DAJAT is available here:
https://github.com/val-iisc/DAJAT.
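The abstract ties three ideas together: simple and complex augmentation views routed through separate batch-normalization layers, a Jensen-Shannon divergence term that couples the views, and an ascending epsilon schedule in ACAT. The minimal PyTorch-style sketch below illustrates those three pieces in isolation. It is not the authors' implementation (that lives in the linked repository); the module names, the `aug_type` routing argument, and the linear schedule are illustrative assumptions.
```python
# Minimal sketch of the ideas described above -- NOT the official DAJAT code
# (see https://github.com/val-iisc/DAJAT for that). Module names, the
# `aug_type` routing argument, and the linear epsilon schedule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBatchNorm2d(nn.Module):
    """BatchNorm with separate statistics for simple vs. complex augmentations."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_simple = nn.BatchNorm2d(num_features)   # e.g. pad-crop/flip views
        self.bn_complex = nn.BatchNorm2d(num_features)  # e.g. AutoAugment-style views

    def forward(self, x: torch.Tensor, aug_type: str) -> torch.Tensor:
        return self.bn_simple(x) if aug_type == "simple" else self.bn_complex(x)


def jensen_shannon_consistency(logits_list):
    """JS-divergence-style consistency loss across augmented views of a batch."""
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    mixture = torch.stack(probs).mean(dim=0).clamp_min(1e-7)
    log_mixture = mixture.log()
    # Average of KL(p_i || mixture); F.kl_div takes log-probs as its first argument.
    return sum(F.kl_div(log_mixture, p, reduction="batchmean") for p in probs) / len(probs)


def ascending_epsilon(epoch: int, total_epochs: int, eps_max: float = 8.0 / 255) -> float:
    """Linearly ramp the attack budget up to eps_max (one plausible ACAT-style schedule)."""
    return eps_max * min(1.0, (epoch + 1) / total_epochs)
```
In a DAJAT-style training step these pieces would combine roughly as cross-entropy on the adversarial examples of each augmentation view (each view passing through its own BN branch) plus a weighted `jensen_shannon_consistency` term over the views' logits, with the per-epoch perturbation bound taken from `ascending_epsilon`; the exact losses, weights, and schedule used in the paper may differ.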
Related papers
- Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline
Pre-Training with Model Based Augmentation [59.899714450049494]
Offline pre-training can produce sub-optimal policies and lead to degraded online reinforcement learning performance.
We propose a model-based data augmentation strategy to maximize the benefits of offline reinforcement learning pre-training and reduce the scale of data needed to be effective.
arXiv Detail & Related papers (2023-12-15T14:49:41Z)
- Incorporating Supervised Domain Generalization into Data Augmentation [4.14360329494344]
We propose a method, contrastive semantic alignment (CSA) loss, to improve the robustness and training efficiency of data augmentation.
Experiments on the CIFAR-100 and CUB datasets show that the proposed method improves the robustness and training efficiency of typical data augmentations.
arXiv Detail & Related papers (2023-10-02T09:20:12Z)
- Efficient Augmentation for Imbalanced Deep Learning [8.38844520504124]
We study a convolutional neural network's internal representation of imbalanced image data.
We measure the generalization gap between a model's feature embeddings in the training and test sets, showing that the gap is wider for minority classes.
This insight enables us to design an efficient three-phase CNN training framework for imbalanced data.
arXiv Detail & Related papers (2022-07-13T09:43:17Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Adversarial Unlearning: Reducing Confidence Along Adversarial Directions [88.46039795134993]
We propose a complementary regularization strategy that reduces confidence on self-generated examples.
The method, which we call RCAD, aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss.
Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques to increase test accuracy by 1-3% in absolute value (a sketch of the idea appears after this list).
arXiv Detail & Related papers (2022-06-03T02:26:24Z)
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [94.92954973680914]
We introduce two alternatives for sparse adversarial training: (i) static sparsity and (ii) dynamic sparsity.
We find that both methods yield a win-win: substantially shrinking the robust generalization gap and alleviating robust overfitting.
Our approaches can be combined with existing regularizers, establishing new state-of-the-art results in adversarial training.
arXiv Detail & Related papers (2022-02-20T15:52:08Z)
- Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier [23.886422706697882]
We propose a novel training procedure named Friendly Training.
We show that Friendly Training yields improvements with respect to informed data sub-selection and random selection.
Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
arXiv Detail & Related papers (2021-06-21T10:50:34Z)
- Guided Interpolation for Adversarial Training [73.91493448651306]
As training progresses, the training data becomes less and less attackable, undermining the robustness enhancement.
We propose the guided interpolation framework (GIF), which employs the previous epoch's meta information to guide the data's adversarial variants.
Compared with the vanilla mixup, the GIF can provide a higher ratio of attackable data, which is beneficial to the robustness enhancement.
arXiv Detail & Related papers (2021-02-15T03:55:08Z)
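The RCAD entry above (Adversarial Unlearning: Reducing Confidence Along Adversarial Directions) describes its technique compactly: take a large step along an input direction chosen adversarially to increase the training loss, then reduce confidence on the resulting examples. The sketch below is a hedged illustration of that idea only; the function name, step size, weight, and the entropy-based confidence penalty are assumptions for illustration, not the paper's exact recipe.
```python
# Minimal sketch of an RCAD-style regularizer as summarized above; the step
# size, weight, and entropy penalty are illustrative assumptions.
import torch
import torch.nn.functional as F


def rcad_regularizer(model, x, y, step_size=0.5, weight=0.1):
    """Reduce confidence on examples generated along loss-increasing directions."""
    x_req = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    # A large step along the adversarially chosen (loss-increasing) direction.
    x_adv = (x + step_size * grad.sign()).detach()
    probs = F.softmax(model(x_adv), dim=1)
    entropy = -(probs * probs.clamp_min(1e-7).log()).sum(dim=1).mean()
    # Minimizing -entropy raises predictive entropy, i.e. lowers confidence
    # on the self-generated examples.
    return -weight * entropy
```
Per the summary, a term like this would simply be added to whatever training objective is already in use.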