Incorporating Supervised Domain Generalization into Data Augmentation
- URL: http://arxiv.org/abs/2310.01029v1
- Date: Mon, 2 Oct 2023 09:20:12 GMT
- Title: Incorporating Supervised Domain Generalization into Data Augmentation
- Authors: Shohei Enomoto, Monikka Roslianna Busto, Takeharu Eda
- Abstract summary: We propose a method that leverages the contrastive semantic alignment (CSA) loss, a supervised domain generalization technique, to improve the robustness and training efficiency of data augmentation.
Experiments on the CIFAR-100 and CUB datasets show that the proposed method improves the robustness and training efficiency of typical data augmentations.
- Score: 4.14360329494344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing utilization of deep learning in outdoor settings, its
robustness needs to be enhanced to preserve accuracy in the face of
distribution shifts, such as compression artifacts. Data augmentation is a
widely used technique to improve robustness, thanks to its ease of use and
numerous benefits. However, it requires more training epochs, making it
difficult to train large models with limited computational resources. To
address this problem, we treat data augmentation as supervised domain
generalization (SDG) and benefit from the SDG method, contrastive semantic
alignment (CSA) loss, to improve the robustness and training efficiency of data
augmentation. The proposed method only adds loss during model training and can
be used as a plug-in for existing data augmentation methods. Experiments on the
CIFAR-100 and CUB datasets show that the proposed method improves the
robustness and training efficiency of typical data augmentations.
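As a rough illustration of the plug-in idea, a CSA-style loss can be sketched as a pull term on same-class clean/augmented feature pairs plus a margin-based push term on different-class pairs. This is a minimal sketch under assumptions: the function name, the squared-Euclidean distance, and the margin value are illustrative, not the paper's exact formulation.

```python
import numpy as np

def csa_loss(feat_clean, feat_aug, labels, margin=1.0):
    """Sketch of a contrastive semantic alignment (CSA) style loss.

    Treats clean and augmented views as two "domains": same-class pairs
    across the domains are pulled together, different-class pairs are
    pushed apart up to a margin. feat_* has shape (N, D), labels (N,).
    """
    # Pairwise Euclidean distances between clean and augmented features.
    diff = feat_clean[:, None, :] - feat_aug[None, :, :]   # (N, N, D)
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)            # (N, N)
    same = labels[:, None] == labels[None, :]              # same-class mask
    align = (dist[same] ** 2).mean()                       # pull together
    separate = (np.maximum(0.0, margin - dist[~same]) ** 2).mean()
    return align + separate
```

In use, such a term would simply be added to the task loss during training, leaving the augmentation pipeline itself unchanged, which is what makes it usable as a plug-in.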
Related papers
- AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation [12.697608744311122]
AdaAugment is a tuning-free Adaptive Augmentation method.
It dynamically adjusts augmentation magnitudes for individual training samples based on real-time feedback from the target network.
It consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency.
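A toy sketch of the real-time-feedback idea: nudge each sample's augmentation magnitude up when the target network finds it too easy (low loss) and down when it finds it too hard (high loss). The update rule and every parameter name below are assumptions for illustration, not AdaAugment's actual algorithm.

```python
import numpy as np

def update_magnitudes(magnitudes, losses, target_loss=0.5, lr=0.1,
                      max_mag=1.0):
    """Hypothetical per-sample magnitude update.

    Samples with loss below target_loss get stronger augmentation;
    samples with loss above it get weaker augmentation. Magnitudes are
    kept within [0, max_mag].
    """
    magnitudes = magnitudes + lr * (target_loss - losses)
    return np.clip(magnitudes, 0.0, max_mag)
```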
arXiv Detail & Related papers (2024-05-19T06:54:03Z) - DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection [77.6648187359111]
We propose a novel data augmentation method, DualAug, that keeps augmented data in distribution as much as possible at a reasonable time and computational cost.
Experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
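The in-distribution idea can be caricatured as: try the heavy augmentation, keep it only if an out-of-distribution score stays under a threshold, and otherwise fall back to a basic augmentation. All function names and the scoring interface here are hypothetical placeholders, not DualAug's actual implementation.

```python
def dual_aug(x, basic_aug, heavy_aug, ood_score, threshold):
    """Keep a heavy augmentation only if it stays in-distribution.

    basic_aug / heavy_aug transform a sample; ood_score returns a scalar
    that is larger the further the sample drifts from the data
    distribution.
    """
    x_heavy = heavy_aug(x)
    if ood_score(x_heavy) <= threshold:
        return x_heavy           # heavy augmentation accepted
    return basic_aug(x)          # rejected: fall back to basic augmentation
```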
arXiv Detail & Related papers (2023-10-12T08:55:10Z) - Adversarial and Random Transformations for Robust Domain Adaptation and Generalization [9.995765847080596]
We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The combined adversarial and random transformations based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets.
arXiv Detail & Related papers (2022-11-13T02:10:13Z) - Efficient and Effective Augmentation Strategy for Adversarial Training [48.735220353660324]
Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training.
We propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training.
arXiv Detail & Related papers (2022-10-27T10:59:55Z) - Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z) - Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - Time Matters in Using Data Augmentation for Vision-based Deep Reinforcement Learning [4.921588282642753]
The timing of applying augmentation is critical and depends on the tasks to be solved in training and testing.
If the regularization imposed by augmentation is helpful only at test time, it is better, in terms of sample and computational complexity, to postpone augmentation until after training than to use it during training.
arXiv Detail & Related papers (2021-02-17T05:22:34Z) - Generalization in Reinforcement Learning by Soft Data Augmentation [11.752595047069505]
SOft Data Augmentation (SODA) is a method that decouples augmentation from policy learning.
We find SODA to significantly advance sample efficiency, generalization, and stability in training over state-of-the-art vision-based RL methods.
arXiv Detail & Related papers (2020-11-26T17:00:34Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - Generative Data Augmentation for Commonsense Reasoning [75.26876609249197]
G-DAUGC is a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting.
G-DAUGC consistently outperforms existing data augmentation methods based on back-translation.
Our analysis demonstrates that G-DAUGC produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance.
arXiv Detail & Related papers (2020-04-24T06:12:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.