Improving Diversity with Adversarially Learned Transformations for
Domain Generalization
- URL: http://arxiv.org/abs/2206.07736v1
- Date: Wed, 15 Jun 2022 18:05:24 GMT
- Title: Improving Diversity with Adversarially Learned Transformations for
Domain Generalization
- Authors: Tejas Gokhale, Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya
Kailkhura, Chitta Baral, Yezhou Yang
- Abstract summary: We present a novel framework, adversarially learned transformations (ALT), which uses a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct and large transformations of the source domain, leading to state-of-the-art performance.
- Score: 81.26960899663601
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: For single source domain generalization, maximizing the diversity
of synthesized domains has emerged as one of the most effective strategies.
Many of the recent successes have come from methods that pre-specify the types
of diversity that a model is exposed to during training, so that it can
ultimately generalize well to new domains. However, naïve diversity-based
augmentations do not work effectively for domain generalization, either because
they cannot model large domain shifts or because the span of pre-specified
transforms does not cover the types of shift commonly occurring in domain
generalization. To address this issue, we present a novel framework,
adversarially learned transformations (ALT), which uses a neural network to
model plausible, yet hard image transformations that fool the classifier. This
network is randomly initialized for each batch and trained for a fixed number
of steps to maximize classification error. Further, we enforce consistency
between the classifier's predictions on the clean and transformed images. With
extensive empirical analysis, we find that this new form of adversarial
transformation achieves both objectives of diversity and hardness
simultaneously, outperforming all existing techniques on competitive benchmarks
for single source domain generalization. We also show that ALT can naturally
work with existing diversity modules to produce highly distinct and large
transformations of the source domain, leading to state-of-the-art performance.
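As a rough illustration of the procedure described in the abstract, the following PyTorch sketch re-initializes a small transformation network for each batch, trains it for a fixed number of steps to maximize classification error, and then builds the classifier's loss with a consistency term between predictions on clean and transformed images. All names (AugNet, alt_step) and hyperparameters (inner_steps, lr, lam) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugNet(nn.Module):
    """Small image-to-image network modeling a learnable transformation."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Sigmoid(),  # keep pixels in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def alt_step(classifier, x, y, inner_steps=5, lr=1e-3, lam=1.0):
    """One ALT training step for a batch (x, y); returns the classifier loss."""
    # Randomly re-initialize the transformation network for every batch.
    aug = AugNet(x.shape[1]).to(x.device)
    opt_aug = torch.optim.Adam(aug.parameters(), lr=lr)

    # Inner loop: freeze the classifier and train the augmenter to
    # MAXIMIZE classification error, i.e. to fool the classifier.
    for p in classifier.parameters():
        p.requires_grad_(False)
    for _ in range(inner_steps):
        loss_adv = -F.cross_entropy(classifier(aug(x)), y)
        opt_aug.zero_grad()
        loss_adv.backward()
        opt_aug.step()
    for p in classifier.parameters():
        p.requires_grad_(True)

    # Classifier objective: fit both views and enforce consistency
    # between predictions on the clean and transformed images.
    x_adv = aug(x).detach()
    logits_clean, logits_adv = classifier(x), classifier(x_adv)
    consistency = F.kl_div(F.log_softmax(logits_adv, dim=1),
                           F.softmax(logits_clean, dim=1),
                           reduction="batchmean")
    return (F.cross_entropy(logits_clean, y)
            + F.cross_entropy(logits_adv, y)
            + lam * consistency)
```

In a full training loop, alt_step would be called once per batch and the returned loss backpropagated into the classifier; the paper further combines ALT with existing diversity modules, which this sketch omits.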
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework that uses multiple seen domains to train a model that achieves high accuracy on an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Causality-inspired Latent Feature Augmentation for Single Domain Generalization [13.735443005394773]
Single domain generalization (Single-DG) aims to develop a generalizable model from only a single training domain that performs well on unknown target domains.
Under this domain-scarce configuration, expanding the coverage of the source domain and finding intrinsic causal features across different distributions are key to enhancing a model's generalization ability.
We propose a novel causality-inspired latent feature augmentation method for Single-DG by learning the meta-knowledge of feature-level transformation based on causal learning and interventions.
arXiv Detail & Related papers (2024-06-10T02:42:25Z)
- Cross-Domain Feature Augmentation for Domain Generalization [16.174824932970004]
We propose a cross-domain feature augmentation method named XDomainMix.
Experiments on widely used benchmark datasets demonstrate that our proposed method is able to achieve state-of-the-art performance.
arXiv Detail & Related papers (2024-05-14T13:24:19Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization (DG) aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, under the common belief that diversifying the source domains is conducive to out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and domain diversity may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z)
- Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noise.
Deep models know only the style of the training domain.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- Feature-based Style Randomization for Domain Generalization [27.15070576861912]
Domain generalization (DG) aims to first learn a generic model on multiple source domains and then directly generalize to an arbitrary unseen target domain without any additional adaptation.
This paper develops a simple yet effective feature-based style randomization module to achieve feature-level augmentation.
Compared with existing image-level augmentation, our feature-level augmentation is more goal-oriented and sample-diverse (see the generic sketch after this entry).
arXiv Detail & Related papers (2021-06-06T16:34:44Z)
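The following is a minimal, generic sketch of what feature-level style randomization can look like, assuming an AdaIN-style perturbation of per-channel feature statistics; the function name style_randomize and the noise scheme are hypothetical illustrations, not this paper's specific module.

```python
import torch

def style_randomize(feat, noise_std=0.5, eps=1e-6):
    """Randomize the style of an intermediate feature map feat of shape (N, C, H, W).

    Generic illustration: per-channel feature statistics, which are commonly
    taken to encode "style", are replaced by randomly perturbed versions.
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)          # per-channel mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps  # per-channel std
    normalized = (feat - mu) / sigma                  # strip style statistics
    # Sample new statistics around the originals (hypothetical scheme).
    new_mu = mu * (1 + noise_std * torch.randn_like(mu))
    new_sigma = sigma * (1 + noise_std * torch.randn_like(sigma))
    return normalized * new_sigma + new_mu
```

Applied to early-layer features during training only, such a module exposes the classifier to diverse feature styles while leaving image content unchanged.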
- Learning to Generate Novel Domains for Domain Generalization [115.21519842245752]
This paper focuses on the task of learning from multiple source domains a model that generalizes well to unseen domains.
We employ a data generator to synthesize data from pseudo-novel domains to augment the source domains.
Our method, L2A-OT, outperforms current state-of-the-art DG methods on four benchmark datasets.
arXiv Detail & Related papers (2020-07-07T09:34:17Z)