Respecting Domain Relations: Hypothesis Invariance for Domain Generalization
- URL: http://arxiv.org/abs/2010.07591v1
- Date: Thu, 15 Oct 2020 08:26:08 GMT
- Title: Respecting Domain Relations: Hypothesis Invariance for Domain Generalization
- Authors: Ziqi Wang, Marco Loog, Jan van Gemert
- Abstract summary: In domain generalization, multiple labeled non-independent and non-identically distributed source domains are available during training.
Currently, learning so-called domain invariant representations (DIRs) is the prevalent approach to domain generalization.
- Score: 30.14312814723027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In domain generalization, multiple labeled non-independent and
non-identically distributed source domains are available during training while
neither the data nor the labels of target domains are. Currently, learning
so-called domain invariant representations (DIRs) is the prevalent approach to
domain generalization. In this work, we define DIRs employed by existing works
in probabilistic terms and show that by learning DIRs, overly strict
requirements are imposed concerning the invariance. Particularly, DIRs aim to
perfectly align representations of different domains, i.e. their input
distributions. This is, however, not necessary for good generalization to a
target domain and may even dispose of valuable classification information. We
propose to learn so-called hypothesis invariant representations (HIRs), which
relax the invariance assumptions by merely aligning posteriors, instead of
aligning representations. We report experimental results on public domain
generalization datasets to show that learning HIRs is more effective than
learning DIRs. In fact, our approach can even compete with approaches using
prior knowledge about domains.
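To make the contrast concrete, below is a minimal PyTorch-style sketch, assuming a shared encoder and classifier over two labeled source domains; all function and variable names are illustrative, and the HIR term is one plausible reading of "aligning posteriors", not necessarily the paper's exact loss. The DIR-style term pulls the two domains' feature distributions together (here with an RBF-kernel MMD), while the HIR-style term only asks that the classifier's posteriors agree across domains, matched class by class.

```python
# Hedged sketch: DIR-style loss (align feature distributions across
# source domains) vs. HIR-style loss (align classifier posteriors only).
# Names are illustrative, not the paper's released code.
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD (biased estimator) with an RBF kernel between two
    feature batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def dir_loss(feat_a, feat_b):
    """DIR objective: match the *representation distributions* of two
    source domains, irrespective of labels."""
    return rbf_mmd(feat_a, feat_b)

def hir_loss(logits_a, logits_b, labels_a, labels_b, num_classes):
    """HIR-style objective: only require the classifier's posteriors to
    agree across domains, compared class by class."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    loss, pairs = logits_a.new_zeros(()), 0
    for c in range(num_classes):
        in_a, in_b = labels_a == c, labels_b == c
        if in_a.any() and in_b.any():
            # Compare the mean posterior each domain assigns to class c.
            loss = loss + F.mse_loss(p_a[in_a].mean(0), p_b[in_b].mean(0))
            pairs += 1
    return loss / max(pairs, 1)
```

In a training loop either term would be added to the usual cross-entropy loss with a trade-off weight. The point of the sketch is that the HIR term leaves the features themselves unconstrained, so domain-specific but class-discriminative structure can survive, which is exactly the slack the abstract argues DIRs give up.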
Related papers
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in a non-stationary environment.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods.
arXiv Detail & Related papers (2022-04-01T16:56:26Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Learning Transferable Parameters for Unsupervised Domain Adaptation [29.962241958947306]
Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift.
We propose Transferable Parameter Learning (TransPar) to reduce the side effect brought by domain-specific information in the learning process.
arXiv Detail & Related papers (2021-08-13T09:09:15Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences (the standard form of that characterization is sketched after this list).
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
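As referenced in the f-Domain-Adversarial Learning entry above, the discrepancy used there builds on the standard variational (Fenchel-dual) characterization of an f-divergence. Here is a minimal LaTeX sketch of that characterization, with KL as the usual example; the paper's own bound additionally restricts the class of functions T, which is not reproduced here.

```latex
% Variational (Fenchel-dual) characterization of an f-divergence.
% \phi is convex with \phi(1) = 0; \phi^{*} is its convex conjugate.
\[
  D_{\phi}(P \,\|\, Q)
    = \sup_{T}\; \mathbb{E}_{x \sim P}\bigl[T(x)\bigr]
      - \mathbb{E}_{x \sim Q}\bigl[\phi^{*}\bigl(T(x)\bigr)\bigr],
  \qquad
  \phi^{*}(t) = \sup_{u \in \mathbb{R}} \bigl(ut - \phi(u)\bigr).
\]
% Example: \phi(u) = u \log u recovers the KL divergence,
% with conjugate \phi^{*}(t) = e^{t-1}.
```

Restricting the supremum to a trainable function class turns the right-hand side into an adversarial objective, which is the general route f-domain-adversarial methods take.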
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.