Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain
- URL: http://arxiv.org/abs/2202.10725v1
- Date: Tue, 22 Feb 2022 08:37:16 GMT
- Title: Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain
- Authors: Ren Chuan-Xian, Liu Yong-Hui, Zhang Xi-Wen, Huang Ke-Kun
- Abstract summary: Multi-source domain adaptation (MDA) aims to transfer knowledge from multiple source domains to an unlabeled target domain.
We propose a novel MDA approach, termed Pseudo Target for MDA (PTMDA).
PTMDA maps each group of source and target domains into a group-specific subspace using adversarial learning with a metric constraint.
We show that PTMDA as a whole can reduce the target error bound and lead to a better approximation of the target risk in MDA settings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-source domain adaptation (MDA) aims to transfer knowledge from multiple
source domains to an unlabeled target domain. MDA is a challenging task due to
severe domain shift, which exists not only between the target and the sources
but also among the diverse sources themselves. Prior studies on MDA either
estimate a mixed distribution of source domains or combine multiple
single-source models, but few of them exploit the relevant information shared
among the diverse source domains. For this reason, we propose a novel MDA
approach, termed Pseudo Target for MDA (PTMDA). Specifically, PTMDA maps each
group of source and target domains into a group-specific subspace using
adversarial learning with a metric constraint, and constructs a series of
pseudo target domains correspondingly. Then we efficiently align the remaining
source domains with the pseudo target domain in the subspace, which makes it
possible to exploit additional structured source information through training
on the pseudo target domain and improves performance on the real target
domain. In addition, to improve the transferability of deep neural networks
(DNNs), we replace the traditional batch normalization layer with an effective
matching normalization layer, which enforces alignment in the latent layers of
DNNs and thus yields a further gain. We give a theoretical analysis showing
that PTMDA as a whole can reduce the target error bound and lead to a better
approximation of the target risk in MDA settings. Extensive experiments
demonstrate PTMDA's effectiveness on MDA tasks, as it outperforms
state-of-the-art methods in most experimental settings.
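The abstract does not give the exact form of the matching normalization layer. A minimal NumPy sketch of one plausible reading, in which each domain's latent activations are standardized with that domain's own batch statistics so both domains are mapped toward a common zero-mean, unit-variance distribution (shared affine parameters and running statistics are omitted; `matching_normalize` and all sample data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def matching_normalize(features, eps=1e-5):
    """Standardize a batch of latent features (n, d) per dimension,
    using the batch's own mean and variance."""
    mu = features.mean(axis=0, keepdims=True)
    var = features.var(axis=0, keepdims=True)
    return (features - mu) / np.sqrt(var + eps)

# Toy source and target activations drawn from shifted distributions,
# standing in for the latent outputs of a DNN layer.
rng = np.random.default_rng(0)
src = rng.normal(loc=2.0, scale=3.0, size=(64, 8))
tgt = rng.normal(loc=-1.0, scale=0.5, size=(64, 8))

src_n, tgt_n = matching_normalize(src), matching_normalize(tgt)

# Per-dimension mean gap between the two domains, before and after
# normalization: the per-domain standardization closes the gap.
gap_before = np.abs(src.mean(axis=0) - tgt.mean(axis=0)).max()
gap_after = np.abs(src_n.mean(axis=0) - tgt_n.mean(axis=0)).max()
```

After normalization both domains share (approximately) the same first and second moments per dimension, which is the kind of latent-layer alignment the matching normalization layer is described as enforcing; the actual layer would additionally learn affine parameters end-to-end.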
Related papers
- More is Better: Deep Domain Adaptation with Multiple Sources [34.26271755493111]
Multi-source domain adaptation (MDA) is a powerful and practical extension in which the labeled data may be collected from multiple sources with different distributions.
In this survey, we first define various MDA strategies. Then we systematically summarize and compare modern MDA methods in the deep learning era from different perspectives.
arXiv Detail & Related papers (2024-05-01T03:37:12Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation [3.367755441623275]
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain.
We propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA).
The proposed mechanism controls the alignment level of features between each source domain and the target domain, effectively leveraging the locally advantageous feature information within the source domains.
arXiv Detail & Related papers (2023-07-26T09:40:19Z)
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- T-SVDNet: Exploring High-Order Prototypical Correlations for Multi-Source Domain Adaptation [41.356774580308986]
We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
arXiv Detail & Related papers (2021-07-30T06:33:05Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
- Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.