Domain Invariant Representation Learning with Domain Density Transformations
- URL: http://arxiv.org/abs/2102.05082v1
- Date: Tue, 9 Feb 2021 19:25:32 GMT
- Title: Domain Invariant Representation Learning with Domain Density Transformations
- Authors: A. Tuan Nguyen, Toan Tran, Yarin Gal, Atilim Gunes Baydin
- Abstract summary: Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains.
We show how to use generative adversarial networks to learn such domain transformations to implement our method in practice.
- Score: 30.29600757980369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain generalization refers to the problem where we aim to train a model on
data from a set of source domains so that the model can generalize to unseen
target domains. Naively training a model on the aggregate set of data (pooled
from all source domains) has been shown to perform suboptimally, since the
information learned by that model might be domain-specific and generalize
imperfectly to target domains. To tackle this problem, a predominant approach
is to find and learn some domain-invariant information in order to use it for
the prediction task. In this paper, we propose a theoretically grounded method
to learn a domain-invariant representation by enforcing the representation
network to be invariant under all transformation functions among domains. We
also show how to use generative adversarial networks to learn such domain
transformations to implement our method in practice. We demonstrate the
effectiveness of our method on several widely used datasets for the domain
generalization problem, on all of which we achieve competitive results with
state-of-the-art models.
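The core idea in the abstract, enforcing the representation network to be invariant under learned transformations between domains, can be illustrated with a minimal sketch. Everything below is a toy stand-in: `featurizer` is a linear map in place of a deep network, and `domain_transform` is a fixed affine map in place of the GAN-learned domain density transformation the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def featurizer(x, W):
    """Toy linear representation network g(x) = xW (stand-in for a deep net)."""
    return x @ W

def domain_transform(x, A, b):
    """Stand-in for a learned transformation G mapping one domain's data to
    another's. In the paper this map would be learned with a GAN; here it is
    a fixed affine map, used only to illustrate the invariance penalty."""
    return x @ A + b

# toy batch from a "source" domain
x = rng.normal(size=(32, 4))
W = rng.normal(size=(4, 3))
A = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
b = 0.05 * rng.normal(size=4)

# invariance penalty: the representation should agree before and after the
# cross-domain transformation; it would be added to the usual task loss.
z_src = featurizer(x, W)
z_trans = featurizer(domain_transform(x, A, b), W)
invariance_loss = float(np.mean((z_src - z_trans) ** 2))
print(round(invariance_loss, 4))
```

In training, this penalty would be weighted and added to the prediction loss, so gradients push `featurizer` toward producing the same representation for a sample and its transformed counterpart.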
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Cross-Domain Ensemble Distillation for Domain Generalization [17.575016642108253]
We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED)
Our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble.
We show that models learned by our method are robust against adversarial attacks and image corruptions.
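The XDED loss described above can be sketched as follows: average the predictive distributions of same-label examples drawn from different domains, then penalize each one's divergence from that ensemble. The function name and details (no temperature scaling, KL divergence as the mismatch measure) are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xded_style_loss(logits_per_domain):
    """Cross-domain ensemble distillation, sketched from the abstract:
    build an ensemble distribution over same-label outputs from different
    domains, then penalize each output's KL divergence from the ensemble."""
    probs = softmax(logits_per_domain)            # (num_domains, num_classes)
    ensemble = probs.mean(axis=0, keepdims=True)  # ensemble of the logits' softmaxes
    kl = np.sum(probs * (np.log(probs) - np.log(ensemble)), axis=-1)
    return float(kl.mean())

# three domains' logits for examples sharing the same label
logits = np.array([[2.0, 0.5, -1.0],
                   [1.5, 1.0, -0.5],
                   [2.5, 0.0, -1.5]])
print(round(xded_style_loss(logits), 4))
```

The loss is zero exactly when all domains produce identical predictive distributions, which is the cross-domain consistency the method encourages.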
arXiv Detail & Related papers (2022-11-25T12:32:36Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Towards Data-Free Domain Generalization [12.269045654957765]
How can knowledge contained in models trained on different source data domains be merged into a single model that generalizes well to unseen target domains?
Prior domain generalization methods typically rely on using source domain data, making them unsuitable for private decentralized data.
We propose DEKAN, an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift.
arXiv Detail & Related papers (2021-10-09T11:44:05Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
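The two-step approach described above can be sketched minimally: derive a domain embedding from unlabeled test-domain samples, then feed it to the model as supplementary input. Both functions below are illustrative stand-ins (the paper learns the embedding discriminatively; here it is just a batch-mean feature, and the predictor is a hypothetical linear head).

```python
import numpy as np

rng = np.random.default_rng(1)

def domain_embedding(batch):
    """Step (a), sketched: a domain embedding from unlabeled examples.
    Here simply the batch-mean feature vector, as an illustrative stand-in
    for the learned discriminative embedding."""
    return batch.mean(axis=0)

def adaptive_predict(x, d_emb, W):
    """Step (b), sketched: use the domain embedding as supplementary
    information by concatenating it to each example's features."""
    tiled = np.broadcast_to(d_emb, (x.shape[0], d_emb.shape[0]))
    return np.concatenate([x, tiled], axis=1) @ W  # hypothetical linear head

test_batch = rng.normal(size=(16, 4))  # samples from an unseen domain
W = rng.normal(size=(8, 3))            # head over 4 feature + 4 embedding dims
preds = adaptive_predict(test_batch, domain_embedding(test_batch), W)
print(preds.shape)  # prints (16, 3)
```

The point of the design is that the same trained head can adapt its predictions to an unseen domain at test time, since the domain embedding computed from that domain's own samples shifts the head's input.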
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.