Learning Invariant Representation with Consistency and Diversity for
Semi-supervised Source Hypothesis Transfer
- URL: http://arxiv.org/abs/2107.03008v1
- Date: Wed, 7 Jul 2021 04:14:24 GMT
- Title: Learning Invariant Representation with Consistency and Diversity for
Semi-supervised Source Hypothesis Transfer
- Authors: Xiaodong Wang, Junbao Zhuo, Shuhao Cui, Shuhui Wang
- Abstract summary: We propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model so as to generalize well in the target domain with only a few labeled samples.
We propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that facilitates prediction consistency between two randomly augmented views of unlabeled data.
Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on the DomainNet, Office-Home and Office-31 datasets.
- Score: 46.68586555288172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised domain adaptation (SSDA) aims to solve tasks in the target domain by utilizing transferable information learned from the available source domain and a few labeled target samples. However, the source data is not always accessible in practical scenarios, which restricts the application of SSDA in real-world circumstances. In this paper, we propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model, aiming to generalize well in the target domain with only a few labeled samples. SSHT faces two challenges: (1) the insufficient labeled target data may leave target features near the decision boundary, increasing the risk of misclassification; (2) the source-domain data are usually imbalanced, so the model trained on them is biased. The biased model is prone to categorize samples of minority categories into majority ones, resulting in low prediction diversity. To tackle these issues, we propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that facilitates prediction consistency between two randomly augmented views of unlabeled data and maintains prediction diversity when adapting the model to the target domain. Consistency regularization makes it harder for the model to memorize the few labeled target samples and thus enhances the generalization ability of the learned model. We further integrate Batch Nuclear-norm Maximization into our method to enhance discriminability and diversity. Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on the DomainNet, Office-Home and Office-31 datasets. The code is available at
https://github.com/Wang-xd1899/SSHT.
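For concreteness, below is a minimal PyTorch sketch of the two unlabeled-data objectives the abstract describes: a consistency loss between two random augmentations of the same batch, plus a Batch Nuclear-norm Maximization (BNM) term. This is an illustrative reconstruction, not the authors' released code: the exact consistency form (MSE here), the stop-gradient on one view, and the weights lambda_con and lambda_bnm are assumptions.

```python
import torch
import torch.nn.functional as F

def cdl_unlabeled_losses(model, x_weak, x_strong,
                         lambda_con=1.0, lambda_bnm=1.0):
    """Consistency + diversity losses on an unlabeled target batch.

    x_weak, x_strong: two randomly augmented views of the same images.
    (Names and weights are illustrative assumptions, not the paper's API.)
    """
    p_weak = F.softmax(model(x_weak), dim=1)    # (B, C) class probabilities
    p_strong = F.softmax(model(x_strong), dim=1)

    # Consistency: pull the two views' predictions together; the weak
    # view is detached so it serves as a soft target.
    loss_con = F.mse_loss(p_strong, p_weak.detach())

    # BNM (Cui et al., CVPR 2020): maximizing the nuclear norm of the
    # batch prediction matrix encourages predictions that are both
    # confident and diverse, so we minimize its negative.
    loss_bnm = -torch.linalg.matrix_norm(p_strong, ord='nuc') / p_strong.size(0)

    return lambda_con * loss_con + lambda_bnm * loss_bnm
```

In training, a term like this would be added to the standard cross-entropy loss on the few labeled target samples.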
Related papers
- CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain
Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (see the sketch below).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
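As a rough illustration of the kind of alignment objective mentioned above, here is a hedged sketch of a single-kernel MMD loss between source-like and target-specific features. The Gaussian kernel, the fixed bandwidth sigma, and the function name are assumptions for illustration; DaC's actual memory bank-based estimator may differ.

```python
import torch

def gaussian_mmd(feat_a, feat_b, sigma=1.0):
    """Squared MMD between two feature sets with one Gaussian kernel.

    feat_a: (Na, D) source-like features (e.g., drawn from a memory bank).
    feat_b: (Nb, D) target-specific features from the current batch.
    (A simplified, biased estimator; an assumption, not DaC's exact loss.)
    """
    def kernel(x, y):
        d2 = torch.cdist(x, y).pow(2)       # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))

    k_aa = kernel(feat_a, feat_a).mean()    # within-set similarity (a)
    k_bb = kernel(feat_b, feat_b).mean()    # within-set similarity (b)
    k_ab = kernel(feat_a, feat_b).mean()    # cross-set similarity
    return k_aa + k_bb - 2 * k_ab           # -> 0 as distributions match
```

Minimizing this quantity pulls the two feature distributions together; in practice, several kernel bandwidths are often averaged for robustness.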
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Dynamic Domain Adaptation for Efficient Inference [12.713628738434881]
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain.
Most prior DA approaches leverage complicated and powerful deep neural networks to improve the adaptation capacity.
We propose a dynamic domain adaptation (DDA) framework, which can simultaneously achieve efficient target inference in low-resource scenarios.
arXiv Detail & Related papers (2021-03-26T08:53:16Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained on the source domain is available, instead of access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and investigates how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)