Learning transferable and discriminative features for unsupervised
domain adaptation
- URL: http://arxiv.org/abs/2003.11723v2
- Date: Sat, 26 Jun 2021 03:54:44 GMT
- Title: Learning transferable and discriminative features for unsupervised
domain adaptation
- Authors: Yuntao Du, Ruiting Zhang, Xiaowen Zhang, Yirong Yao, Hengyang Lu,
Chongjun Wang
- Abstract summary: Unsupervised domain adaptation overcomes the lack of labeled data by transferring knowledge from a labeled source domain to an unlabeled target domain.
In this paper, a novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TFDF) is proposed to optimize transferability and discriminability simultaneously.
Comprehensive experiments are conducted on five real-world datasets and the results verify the effectiveness of the proposed method.
- Score: 6.37626180021317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although remarkable progress has been achieved, it remains very difficult to
induce a supervised classifier without any labeled data. Unsupervised domain adaptation
is able to overcome this challenge by transferring knowledge from a labeled source
domain to an unlabeled target domain. Transferability and discriminability are two key
criteria for characterizing the superiority of feature representations to enable
successful domain adaptation. In this paper, a novel method called \textit{learning
TransFerable and Discriminative Features for unsupervised domain adaptation} (TFDF) is
proposed to optimize these two objectives simultaneously. On the one hand, distribution
alignment is performed to reduce domain discrepancy and learn more transferable
representations. Instead of adopting \textit{Maximum Mean Discrepancy} (MMD), which
captures only first-order statistical information, we measure distribution discrepancy
with the recently proposed \textit{Maximum Mean and Covariance Discrepancy} (MMCD),
which captures both first-order and second-order statistical information in the
reproducing kernel Hilbert space (RKHS). On the other hand, we propose to explore local
discriminative information via manifold regularization and global discriminative
information via minimizing the proposed \textit{class confusion} objective, so as to
learn more discriminative features. We integrate these two objectives into the
\textit{Structural Risk Minimization} (SRM) framework and learn a domain-invariant
classifier. Comprehensive experiments are conducted on five real-world datasets and the
results verify the effectiveness of the proposed method.
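The abstract rests on two computable quantities, so two rough sketches may help make them concrete. First, MMCD augments the usual mean-embedding discrepancy (MMD) with a second-order covariance discrepancy in the RKHS. The snippet below is only an illustrative stand-in under stated assumptions, not the authors' estimator: it pairs an RBF-kernel MMD term with a CORAL-style Frobenius distance between input-space feature covariances, and the function names and the `gamma`/`beta` parameters are introduced here purely for illustration.

```python
# Illustrative sketch only: the paper's MMCD measures both the mean and the
# covariance discrepancy between domains inside an RKHS. This simplified
# stand-in combines an RBF-kernel MMD term (first-order) with a CORAL-style
# Frobenius distance between feature covariances (second-order surrogate
# computed in the input feature space, not the RKHS).
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Pairwise RBF kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    sq_dists = (
        np.sum(a**2, axis=1)[:, None]
        + np.sum(b**2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of the squared MMD between source and target samples."""
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

def covariance_discrepancy(source, target):
    """Squared Frobenius distance between the two domains' feature covariances."""
    c_s = np.cov(source, rowvar=False)
    c_t = np.cov(target, rowvar=False)
    return np.linalg.norm(c_s - c_t, ord="fro") ** 2

def mean_and_covariance_discrepancy(source, target, gamma=1.0, beta=1.0):
    """First-order (MMD) term plus a weighted second-order (covariance) term."""
    return mmd2(source, target, gamma) + beta * covariance_discrepancy(source, target)

# Example with random features standing in for learned representations.
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(100, 16))   # "source" features
xt = rng.normal(0.5, 1.2, size=(120, 16))   # shifted "target" features
print(mean_and_covariance_discrepancy(xs, xt))
```

Second, the class confusion objective for global discriminability penalizes cross-class mass in a class-correlation matrix built from soft predictions on unlabeled target data. The exact formulation is not spelled out in this summary, so the following is a generic, hedged sketch of that idea; `class_confusion_loss` is a hypothetical name.

```python
# Illustrative sketch only: a generic class-confusion-style objective built
# from softmax outputs on unlabeled target samples. Off-diagonal entries of
# the row-normalised class co-activation matrix indicate pairs of classes the
# model tends to confuse; minimising their sum encourages sharper, better
# separated target predictions.
import numpy as np

def class_confusion_loss(probs):
    """probs: (n_samples, n_classes) softmax outputs on unlabeled target data."""
    confusion = probs.T @ probs                                   # (C, C) co-activation
    confusion = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalise
    cross_class = confusion.sum() - np.trace(confusion)           # off-diagonal mass
    return cross_class / probs.shape[1]

# Confident, well-separated predictions yield a lower loss than uniform ones.
rng = np.random.default_rng(0)
sharp = np.exp(5.0 * rng.normal(size=(32, 4)))
sharp /= sharp.sum(axis=1, keepdims=True)      # confident predictions
flat = np.full((32, 4), 0.25)                  # maximally ambiguous predictions
print(class_confusion_loss(sharp), class_confusion_loss(flat))
```

In a full TFDF-style objective, terms like these would be weighted and added, together with the manifold-regularization term, to the labeled-source classification loss inside the SRM framework described above.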
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer [46.68586555288172]
We propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model and aims to generalize well in the target domain with only a few labeled target samples.
We propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that facilitates prediction consistency between two randomly augmented views of unlabeled data.
Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on DomainNet, Office-Home and Office-31 datasets.
arXiv Detail & Related papers (2021-07-07T04:14:24Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Unsupervised Domain Adaptation via Discriminative Manifold Propagation [26.23123292060868]
Unsupervised domain adaptation is effective in leveraging rich information from a labeled source domain to an unlabeled target domain.
The proposed method can be used to tackle a series of variants of domain adaptation problems, including both vanilla and partial settings.
arXiv Detail & Related papers (2020-08-23T12:31:37Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment [23.72562139715191]
Unsupervised domain adaptation is effective in leveraging the rich information from the labeled source domain for the unlabeled target domain.
Hard-assigned pseudo labels on the target domain are risky to the intrinsic data structure.
A consistent manifold learning framework is proposed to achieve transferability and discriminability consistently.
arXiv Detail & Related papers (2020-02-20T11:06:41Z)