Semi-Supervised Hypothesis Transfer for Source-Free Domain Adaptation
- URL: http://arxiv.org/abs/2107.06735v1
- Date: Wed, 14 Jul 2021 14:26:09 GMT
- Title: Semi-Supervised Hypothesis Transfer for Source-Free Domain Adaptation
- Authors: Ning Ma, Jiajun Bu, Lixian Lu, Jun Wen, Zhen Zhang, Sheng Zhou, Xifeng Yan
- Abstract summary: We propose a novel domain adaptation method via hypothesis transfer without accessing source data at the adaptation stage.
To fully use the limited target data, a semi-supervised mutual enhancement method is proposed.
Compared with state-of-the-art methods, our method achieves up to 19.9% improvement on semi-supervised adaptation tasks.
- Score: 38.982377864475374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain Adaptation has been widely used to deal with the distribution shift in
vision, language, multimedia, etc. Most domain adaptation methods learn
domain-invariant features with data from both domains available. However, such
a strategy might be infeasible in practice when source data are unavailable due
to data-privacy concerns. To address this issue, we propose a novel adaptation
method via hypothesis transfer without accessing source data at the adaptation
stage. To fully use the limited target data, a semi-supervised mutual
enhancement method is proposed, in which entropy minimization and augmented
label propagation are used iteratively to perform inter-domain and intra-domain
alignments. Experimental results on three public datasets demonstrate that our
method achieves up to 19.9% improvement over state-of-the-art methods on
semi-supervised adaptation tasks.
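The mutual-enhancement loop described in the abstract alternates two signals on the target domain: entropy minimization of the (source-hypothesis) classifier's predictions for inter-domain alignment, and label propagation over a feature-similarity graph that spreads the few target labels to unlabeled samples for intra-domain alignment. The sketch below is a generic PyTorch illustration of those two building blocks, not the authors' released implementation; the function names, the kNN graph construction, and the hyper-parameters (k, alpha, iters) are assumptions.

```python
# Minimal sketch (not the authors' code) of the two alternating alignment steps:
# entropy minimization on target predictions and label propagation over a kNN
# feature graph. Hyper-parameters and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy of a batch; minimizing it sharpens the
    source hypothesis' decisions on target data (inter-domain step)."""
    probs = F.softmax(logits, dim=1)
    return -(probs * torch.log(probs + 1e-6)).sum(dim=1).mean()


def propagate_labels(features: torch.Tensor, labels: torch.Tensor,
                     num_classes: int, k: int = 5, alpha: float = 0.99,
                     iters: int = 10) -> torch.Tensor:
    """Spread the few labeled target samples to unlabeled ones over a
    symmetric kNN similarity graph (intra-domain step).
    `labels` is a LongTensor using -1 for unlabeled samples."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                          # cosine similarities
    knn = sim.topk(k + 1, dim=1).indices[:, 1:]      # drop the self-edge
    W = torch.zeros_like(sim).scatter_(1, knn, 1.0)
    W = (W + W.t()) / 2                              # symmetric affinity
    d = W.sum(dim=1).clamp(min=1e-6).pow(-0.5)
    S = d[:, None] * W * d[None, :]                  # normalized graph

    labeled = labels >= 0
    Y = torch.zeros(features.size(0), num_classes, device=features.device)
    Y[labeled] = F.one_hot(labels[labeled], num_classes).float()
    Z = Y.clone()
    for _ in range(iters):                           # Z <- alpha*S*Z + (1-alpha)*Y
        Z = alpha * (S @ Z) + (1 - alpha) * Y
    return Z.argmax(dim=1)                           # propagated pseudo-labels
```

A training loop would then combine `entropy_loss` on the classifier's target logits with a cross-entropy term on the propagated pseudo-labels, alternating the two steps each round; the exact weighting and augmentation scheme are given in the paper.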
Related papers
- Cross-Domain Label Propagation for Domain Adaptation with Discriminative Graph Self-Learning [8.829109854586573]
Domain adaptation manages to transfer the knowledge of well-labeled source data to unlabeled target data.
We propose a novel domain adaptation method, which infers target pseudo-labels through cross-domain label propagation.
arXiv Detail & Related papers (2023-02-17T05:55:32Z)
- Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain Adaptation [45.024029784248825]
Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain.
For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain.
We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation.
arXiv Detail & Related papers (2022-12-15T23:20:13Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process under the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Semi-Supervised Adversarial Discriminative Domain Adaptation [18.15464889789663]
Domain adaptation is a promising method for training powerful deep neural networks in the absence of labeled data.
In this paper, we propose an improved adversarial domain adaptation method called Semi-Supervised Adversarial Discriminative Domain Adaptation (SADDA).
arXiv Detail & Related papers (2021-09-27T12:52:50Z)
- Uncertainty-Guided Mixup for Semi-Supervised Domain Adaptation without Source Data [37.26484185691251]
Source-free domain adaptation aims to solve the problem by performing domain adaptation without accessing the source data.
We propose uncertainty-guided Mixup to reduce the representation's intra-domain discrepancy and perform inter-domain alignment without directly accessing the source data.
Our method outperforms the recent semi-supervised baselines and the unsupervised variant achieves competitive performance.
arXiv Detail & Related papers (2021-07-14T13:54:02Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)