T-SVDNet: Exploring High-Order Prototypical Correlations for
Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2107.14447v1
- Date: Fri, 30 Jul 2021 06:33:05 GMT
- Title: T-SVDNet: Exploring High-Order Prototypical Correlations for
Multi-Source Domain Adaptation
- Authors: Ruihuang Li, Xu Jia, Jianzhong He, Shuaijun Chen, Qinghua Hu
- Abstract summary: We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
- Score: 41.356774580308986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing domain adaptation methods focus on adaptation from only one source domain; in practice, however, there are a number of relevant sources that could be leveraged to help improve performance on the target domain. We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation (MDA), whose key feature is the incorporation of Tensor Singular Value Decomposition (T-SVD) into the neural network's training pipeline. Overall, high-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap. Specifically, we impose a Tensor-Low-Rank (TLR) constraint on a tensor obtained by stacking up a group of prototypical similarity matrices, aiming to capture consistent data structure across different domains. Furthermore, to avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy that adaptively assigns weights to different source domains and samples based on the result of uncertainty estimation. Extensive experiments conducted on public benchmarks demonstrate the superiority of our model on the MDA task compared to state-of-the-art methods.
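The abstract describes two concrete mechanisms: a Tensor-Low-Rank (TLR) constraint obtained by applying T-SVD to a tensor of stacked prototypical similarity matrices, and an uncertainty-aware weighting of source domains. The sketch below is a minimal illustration of how such a penalty and weighting could look in code, assuming cosine-similarity prototype matrices, a tensor-nuclear-norm surrogate for the TLR constraint, and entropy-based domain weights; these choices, and all function names, are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch only (not the paper's implementation): a tensor-nuclear-norm
# penalty computed via T-SVD on stacked per-domain prototypical similarity matrices,
# plus a toy entropy-based uncertainty weighting of source domains.
import torch
import torch.nn.functional as F


def prototype_similarity(prototypes: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity matrix (C x C) between the C class prototypes of one domain."""
    p = F.normalize(prototypes, dim=1)          # (C, D) -> unit-norm rows
    return p @ p.t()                            # (C, C)


def tensor_nuclear_norm(tensor: torch.Tensor) -> torch.Tensor:
    """Tensor nuclear norm via T-SVD: FFT along the domain mode, then the sum of
    singular values of every frontal slice in the Fourier domain."""
    f = torch.fft.fft(tensor, dim=2)            # tensor: (C, C, K)
    tnn = 0.0
    for k in range(f.shape[2]):
        s = torch.linalg.svdvals(f[:, :, k])    # singular values of slice k (real-valued)
        tnn = tnn + s.sum()
    return tnn / f.shape[2]


def tlr_loss(domain_prototypes: list) -> torch.Tensor:
    """Stack per-domain similarity matrices into a 3-way tensor and penalize its
    nuclear norm, encouraging consistent (low-rank) structure across domains."""
    sims = [prototype_similarity(p) for p in domain_prototypes]   # K matrices, each (C, C)
    return tensor_nuclear_norm(torch.stack(sims, dim=2))          # (C, C, K)


def entropy_domain_weights(domain_logits: list) -> torch.Tensor:
    """Toy uncertainty-aware weighting: domains whose predictions on target data
    are more uncertain (higher mean entropy) receive lower weight."""
    entropies = []
    for logits in domain_logits:                # each: (N_target, C)
        probs = logits.softmax(dim=1)
        ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        entropies.append(ent)
    return torch.softmax(-torch.stack(entropies), dim=0)   # lower entropy -> higher weight
```

In a full training pipeline, such a TLR penalty and the domain-weighted classification and alignment losses would be combined into a single objective; the exact losses, weighting scheme, and optimization schedule used by T-SVDNet are described in the paper itself.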
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source
Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain [0.0]
Multi-source domain adaptation (MDA) aims to transfer knowledge from multiple source domains to an unlabeled target domain.
We propose a novel MDA approach, termed Pseudo Target for MDA (PTMDA).
PTMDA maps each group of source and target domains into a group-specific subspace using adversarial learning with a metric constraint.
We show that PTMDA as a whole can reduce the target error bound and lead to a better approximation of the target risk in MDA settings.
arXiv Detail & Related papers (2022-02-22T08:37:16Z) - A Novel Mix-normalization Method for Generalizable Multi-source Person
Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z) - Improving Transferability of Domain Adaptation Networks Through Domain
Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by assigning weak knowledge from a bag of source models.
We propose to embed Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor.
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z) - Multi-Source domain adaptation via supervised contrastive learning and
confident consistency regularization [0.0]
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains that performs well on an unlabeled target domain.
We propose Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA that addresses this limitation.
arXiv Detail & Related papers (2021-06-30T14:39:15Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z) - Multi-Source Domain Adaptation for Text Classification via
DistanceNet-Bandits [101.68525259222164]
We present a study of various distance-based measures in the context of NLP tasks that characterize the dissimilarity between domains based on sample estimates.
We develop a DistanceNet model which uses these distance measures as an additional loss function to be minimized jointly with the task's loss function.
We extend this model to a novel DistanceNet-Bandit model, which employs a multi-armed bandit controller to dynamically switch between multiple source domains (see the controller sketch after this entry).
arXiv Detail & Related papers (2020-01-13T15:53:41Z)