Learning Transferable Parameters for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2108.06129v1
- Date: Fri, 13 Aug 2021 09:09:15 GMT
- Title: Learning Transferable Parameters for Unsupervised Domain Adaptation
- Authors: Zhongyi Han, Haoliang Sun, Yilong Yin
- Abstract summary: Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift.
We propose Transferable Parameter Learning (TransPar) to reduce the side effects of domain-specific information in the learning process.
- Score: 29.962241958947306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) enables a learning machine to adapt from
a labeled source domain to an unlabeled target domain under distribution shift.
Thanks to the strong representation ability of deep neural networks, recent
remarkable achievements in UDA resort to learning domain-invariant features.
Intuitively, the hope is that a good feature representation, together with the
hypothesis learned from the source domain, can generalize well to the target
domain. However, the learning processes of domain-invariant features and source
hypothesis inevitably involve domain-specific information that would degrade
the generalizability of UDA models on the target domain. In this paper,
motivated by the lottery ticket hypothesis that only partial parameters are
essential for generalization, we find that only partial parameters are
essential for learning domain-invariant information and generalizing well in
UDA. Such parameters are termed transferable parameters. In contrast, the other
parameters tend to fit domain-specific details and often fail to generalize;
these we term untransferable parameters. Driven by this insight, we propose
Transferable Parameter Learning (TransPar) to reduce the side effect brought by
domain-specific information in the learning process and thus enhance the
memorization of domain-invariant information. Specifically, according to the
distribution discrepancy degree, we divide all parameters into transferable and
untransferable ones in each training iteration. We then apply separate update
rules to the two types of parameters. Extensive experiments on image
classification and regression tasks (keypoint detection) show that TransPar
outperforms prior art by non-trivial margins. Moreover, experiments
demonstrate that TransPar can be integrated into the most popular deep UDA
networks and easily extended to handle any data distribution shift scenario.
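To make the per-iteration parameter split concrete, below is a minimal PyTorch sketch of one TransPar-style training step. It is a sketch under stated assumptions, not the authors' implementation: the gradient-magnitude score on the discrepancy loss, the keep_ratio hyperparameter, and the hard zero-masking of untransferable parameters are illustrative stand-ins for the paper's distribution discrepancy degree and separate update rules.

```python
import torch

def transpar_style_step(model, task_loss, transfer_loss, optimizer, keep_ratio=0.5):
    """One illustrative TransPar-style iteration (sketch, not the authors' code).

    Assumption: a parameter counts as transferable when its gradient on the
    distribution-discrepancy (transfer) loss is among the top keep_ratio by
    magnitude; untransferable parameters have their update suppressed so they
    do not memorize domain-specific details.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Score parameters by their gradient on the discrepancy loss alone.
    disc_grads = torch.autograd.grad(
        transfer_loss, params, retain_graph=True, allow_unused=True
    )

    optimizer.zero_grad()
    (task_loss + transfer_loss).backward()

    for p, g in zip(params, disc_grads):
        if g is None or p.grad is None:
            continue  # parameter untouched by the transfer loss
        scores = g.abs().flatten()
        k = max(1, int(keep_ratio * scores.numel()))
        threshold = torch.topk(scores, k).values.min()
        mask = (g.abs() >= threshold).to(p.grad.dtype)  # 1 = transferable
        p.grad.mul_(mask)  # zero the update for untransferable parameters

    optimizer.step()
```

Masking individual weights rather than whole layers mirrors the lottery-ticket intuition that the transferable subnetwork is fine-grained; keep_ratio would be a tuning knob in practice.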
Related papers
- Bayesian Domain Invariant Learning via Posterior Generalization of Parameter Distributions [29.018103152856792]
PosTerior Generalization (PTG) shows competitive performance on various domain generalization benchmarks on DomainBed.
PTG fully exploits variational inference to approximate parameter distributions, including the invariant posterior and the posteriors on training domains.
arXiv Detail & Related papers (2023-10-25T01:17:08Z)
- Learning to Learn Domain-invariant Parameters for Domain Generalization [29.821634033299855]
Domain generalization (DG) aims to overcome distribution shift by capturing domain-invariant representations from source domains.
We propose two modules: Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB).
Our proposed method has achieved state-of-the-art performance with strong generalization capability.
arXiv Detail & Related papers (2022-11-04T07:19:34Z)
- Domain-invariant Feature Exploration for Domain Generalization [35.99082628524934]
We argue that domain-invariant features should originate from both internal and mutual sides.
We propose DIFEX for Domain-Invariant Feature EXploration.
Experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-25T09:55:55Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits greatly from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to generalize to unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the feature-label relationship by predicting labels with the learned conditional distribution (see the sketch after this list).
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Quantifying and Improving Transferability in Domain Generalization [53.16289325326505]
Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world.
We formally define a notion of transferability that can be quantified and computed in domain generalization.
We propose a new algorithm for learning transferable features and test it over various benchmark datasets.
arXiv Detail & Related papers (2021-06-07T14:04:32Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns domain-invariant and domain-specific representations.
HDAN exceeds the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers source-specific features.
We create counterfactual features that distinguish the domain-specific parts from the domain-sharable ones.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Respecting Domain Relations: Hypothesis Invariance for Domain Generalization [30.14312814723027]
In domain generalization, multiple labeled non-independent and non-identically distributed source domains are available during training.
Currently, learning so-called domain invariant representations (DIRs) is the prevalent approach to domain generalization.
arXiv Detail & Related papers (2020-10-15T08:26:08Z)
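For the IV-DG entry above, here is a hypothetical sketch of the described two-stage scheme, assuming per-domain input features and a simple regressor standing in for the conditional distribution; all module names and shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES = 128, 10
cond_model = nn.Linear(FEAT_DIM, FEAT_DIM)     # stage 1: domain-A features -> domain-B features
label_head = nn.Linear(FEAT_DIM, NUM_CLASSES)  # stage 2: predicted features -> labels

def stage1_loss(x_a, x_b):
    # Learn the conditional distribution of domain-B features given
    # domain-A features (approximated here as mean regression).
    return F.mse_loss(cond_model(x_a), x_b)

def stage2_loss(x_a, y):
    # Estimate the feature-label relationship by predicting labels from
    # the stage-1 output, keeping stage 1 fixed.
    with torch.no_grad():
        x_hat = cond_model(x_a)
    return F.cross_entropy(label_head(x_hat), y)
```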
This list is automatically generated from the titles and abstracts of the papers on this site.