Adversarial and Random Transformations for Robust Domain Adaptation and
Generalization
- URL: http://arxiv.org/abs/2211.06788v1
- Date: Sun, 13 Nov 2022 02:10:13 GMT
- Title: Adversarial and Random Transformations for Robust Domain Adaptation and
Generalization
- Authors: Liang Xiao, Jiaolong Xu, Dawei Zhao, Erke Shang, Qi Zhu, Bin Dai
- Abstract summary: We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The combined adversarial and random transformations based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets.
- Score: 9.995765847080596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data augmentation has been widely used to improve generalization in training
deep neural networks. Recent works show that using worst-case transformations
or adversarial augmentation strategies can significantly improve accuracy
and robustness. However, because image transformations are non-differentiable,
search algorithms such as reinforcement learning or evolution strategies must
be applied, which are not computationally practical for large-scale problems.
In this work, we show that by simply applying
consistency training with random data augmentation, state-of-the-art results on
domain adaptation (DA) and generalization (DG) can be obtained. To further
improve the accuracy and robustness with adversarial examples, we propose a
differentiable adversarial data augmentation method based on spatial
transformer networks (STN). The combined adversarial and random transformations
based method outperforms the state-of-the-art on multiple DA and DG benchmark
datasets. In addition, the proposed method shows desirable robustness to
corruption, as validated on commonly used datasets.
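The consistency-training objective described above can be illustrated with a minimal sketch: the model predicts on a clean image and on a randomly transformed copy, and the divergence between the two predictions is penalized. This is not the authors' code; the toy `model`, the nearest-neighbour `affine_transform` (real STN warps use differentiable bilinear sampling), and all sizes are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): the consistency penalty between the two predictions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def affine_transform(img, theta):
    """Warp a square image with a 2x3 affine map (inverse warp,
    nearest-neighbour sampling to keep the sketch short)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Map output coordinates back to input coordinates.
            xs = theta[0, 0] * x + theta[0, 1] * y + theta[0, 2]
            ys = theta[1, 0] * x + theta[1, 1] * y + theta[1, 2]
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = img[yi, xi]
    return out

def model(img, weights):
    """Toy linear classifier standing in for a deep network."""
    return softmax(weights @ img.ravel())

# Hypothetical setup: one 8x8 image, 3 classes.
img = rng.random((8, 8))
weights = rng.standard_normal((3, 64)) * 0.1

# Random augmentation: a small random rotation plus translation.
angle = rng.uniform(-0.2, 0.2)
theta = np.array([[np.cos(angle), -np.sin(angle), rng.uniform(-1, 1)],
                  [np.sin(angle),  np.cos(angle), rng.uniform(-1, 1)]])

p_clean = model(img, weights)
p_aug = model(affine_transform(img, theta), weights)

# Consistency loss: train the network to agree on both views.
loss = kl_divergence(p_clean, p_aug)
print(f"consistency loss: {loss:.4f}")
```

In the adversarial variant the paper proposes, the transformation parameters `theta` would themselves be updated by gradient ascent on this loss through a differentiable STN warp, rather than sampled randomly.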
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- AdvST: Revisiting Data Augmentations for Single Domain Generalization [39.55487584183931]
Single domain generalization aims to train a robust model against unknown target domain shifts using data from a single source domain.
Standard data augmentations with learnable parameters can serve as semantic transformations that manipulate certain semantics of a sample.
We propose Adversarial learning with Semantics Transformations (AdvST) that augments the source domain data with semantics transformations and learns a robust model with the augmented data.
arXiv Detail & Related papers (2023-12-20T02:29:31Z)
- Incorporating Supervised Domain Generalization into Data Augmentation [4.14360329494344]
We propose a method, contrastive semantic alignment (CSA) loss, to improve the robustness and training efficiency of data augmentation.
Experiments on the CIFAR-100 and CUB datasets show that the proposed method improves the robustness and training efficiency of typical data augmentations.
arXiv Detail & Related papers (2023-10-02T09:20:12Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Single Domain Generalization via Normalised Cross-correlation Based Convolutions [14.306250516592304]
Single Domain Generalization aims to train robust models using data from a single source.
We propose a novel operator called XCNorm that computes the normalized cross-correlation between weights and an input feature patch.
We show that deep neural networks composed of this operator are robust to common semantic distribution shifts.
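The core of the XCNorm operator summarized above can be sketched in a few lines. This is a guess at the idea from the abstract, not the authors' definition: both the kernel and the feature patch are mean-centred and length-normalised, so the response lies in [-1, 1] and is unchanged by brightness/contrast shifts of the input, which is one plausible source of the claimed robustness to distribution shift.

```python
import numpy as np

def xcnorm(patch, weights, eps=1e-8):
    """Normalized cross-correlation between a conv kernel and a feature
    patch: centre both, then take the cosine of the centred vectors."""
    p = patch.ravel() - patch.mean()
    w = weights.ravel() - weights.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(w) + eps
    return float(p @ w) / denom

rng = np.random.default_rng(1)
patch = rng.random((3, 3))
kernel = rng.standard_normal((3, 3))

r1 = xcnorm(patch, kernel)
r2 = xcnorm(2.0 * patch + 5.0, kernel)  # brightness/contrast shift
print(f"{r1:.6f} vs {r2:.6f}")  # responses agree up to eps
```

An ordinary convolution would scale its response by the factor 2 and shift it by the added bias, whereas the normalized form returns the same value for both inputs.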
arXiv Detail & Related papers (2023-07-12T04:15:36Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG)
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Exploring Data Aggregation and Transformations to Generalize across Visual Domains [0.0]
This thesis contributes to research on Domain Generalization (DG), Domain Adaptation (DA) and their variations.
We propose new frameworks for Domain Generalization and Domain Adaptation which make use of feature aggregation strategies and visual transformations.
We show how our proposed solutions outperform competitive state-of-the-art approaches in established DG and DA benchmarks.
arXiv Detail & Related papers (2021-08-20T14:58:14Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.