Learning to Generate Novel Domains for Domain Generalization
- URL: http://arxiv.org/abs/2007.03304v3
- Date: Tue, 9 Mar 2021 11:50:54 GMT
- Title: Learning to Generate Novel Domains for Domain Generalization
- Authors: Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang
- Abstract summary: This paper focuses on the task of learning from multiple source domains a model that generalizes well to unseen domains.
We employ a data generator to synthesize data from pseudo-novel domains to augment the source domains.
Our method, L2A-OT, outperforms current state-of-the-art DG methods on four benchmark datasets.
- Score: 115.21519842245752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on domain generalization (DG), the task of learning from
multiple source domains a model that generalizes well to unseen domains. A main
challenge for DG is that the available source domains often exhibit limited
diversity, hampering the model's ability to learn to generalize. We therefore
employ a data generator to synthesize data from pseudo-novel domains to augment
the source domains. This explicitly increases the diversity of available
training domains and leads to a more generalizable model. To train the
generator, we model the distribution divergence between source and synthesized
pseudo-novel domains using optimal transport, and maximize the divergence. To
ensure that semantics are preserved in the synthesized data, we further impose
cycle-consistency and classification losses on the generator. Our method,
L2A-OT (Learning to Augment by Optimal Transport), outperforms current
state-of-the-art DG methods on four benchmark datasets.
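The abstract describes a generator trained by maximizing an optimal-transport divergence between source and synthesized pseudo-novel data, regularized by cycle-consistency and classification losses. The sketch below illustrates how such a combined objective could look; the Sinkhorn solver, the function names, and the loss weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sinkhorn_divergence(x, y, eps=1.0, n_iters=50):
    # Entropic-regularized OT cost between two point clouds
    # (a plain Sinkhorn sketch; the paper's exact OT formulation
    # may differ, and eps must be large enough to avoid underflow).
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** 2
    K = np.exp(-cost / eps)
    a = np.full(len(x), 1.0 / len(x))  # uniform source weights
    b = np.full(len(y), 1.0 / len(y))  # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):
        u = a / (K @ (b / (K.T @ u)))
    v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]  # transport plan
    return float((plan * cost).sum())

def l2a_ot_generator_loss(x_src, x_gen, x_cycle, logits_gen, labels,
                          lam_cycle=1.0, lam_cls=1.0):
    """Hypothetical combined generator objective (to be minimized):
    maximize the OT divergence between source and pseudo-novel data
    (hence the negated term), while preserving semantics via a
    cycle-consistency reconstruction loss and a classification loss."""
    div = sinkhorn_divergence(x_src, x_gen)
    cycle = np.abs(x_src - x_cycle).mean()  # L1 cycle reconstruction
    # cross-entropy on generated samples, labels inherited from source
    probs = np.exp(logits_gen - logits_gen.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    cls = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return -div + lam_cycle * cycle + lam_cls * cls
```

Under this sketch, pushing the generated batch further from the source batch lowers the loss (the divergence term is negated), while the cycle and classification terms penalize drifting away from the source semantics.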
Related papers
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single-source domain to learn a robust model that can be generalized to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach can achieve significant improvements and reach state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z) - Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization [38.50749918578154]
We propose Heterogeneity-based Two-stage Contrastive Learning (HTCL) for the Domain generalization task.
In the first stage, we generate the most heterogeneous dividing pattern with our contrastive metric.
In the second stage, we employ an in-aimed contrastive learning by re-building pairs with the stable relation hinted by domains and classes.
arXiv Detail & Related papers (2023-05-25T09:42:43Z) - Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, the common belief of which is that diversifying source domains will be conducive to the out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z) - Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need of domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - Domain Generalization by Mutual-Information Regularization with Pre-trained Models [20.53534134966378]
Domain generalization (DG) aims to learn a model that generalizes to an unseen target domain using only limited source domains.
We re-formulate the DG objective using mutual information with the oracle model, a model generalized to any possible domain.
Our experiments show that Mutual Information Regularization with Oracle (MIRO) significantly improves the out-of-distribution performance.
arXiv Detail & Related papers (2022-03-21T08:07:46Z) - Learning to Diversify for Single Domain Generalization [46.35670520201863]
Domain generalization (DG) aims to generalize a model trained on multiple source (i.e., training) domains to a distributionally different target (i.e., test) domain.
This paper considers a more realistic yet challenging scenario, namely Single Domain Generalization (Single-DG), where only one source domain is available for training.
In this scenario, the limited diversity may jeopardize the model generalization on unseen target domains.
We propose a style-complement module to enhance the generalization power of the model by synthesizing images from diverse distributions that are complementary to the source ones.
arXiv Detail & Related papers (2021-08-26T12:04:32Z) - Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z) - Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.