Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain Adaptation
- URL: http://arxiv.org/abs/2212.08187v1
- Date: Thu, 15 Dec 2022 23:20:13 GMT
- Title: Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain Adaptation
- Authors: Hao Yan, Yuhong Guo
- Abstract summary: Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain.
For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain.
We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation.
- Score: 45.024029784248825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation reduces the reliance on data annotation in
deep learning by adapting knowledge from a source to a target domain. For
privacy and efficiency concerns, source-free domain adaptation extends
unsupervised domain adaptation by adapting a pre-trained source model to an
unlabeled target domain without accessing the source data. However, most
existing source-free domain adaptation methods to date focus on the
transductive setting, where the target training set is also the testing set. In
this paper, we address source-free domain adaptation in the more realistic
inductive setting, where the target training and testing sets are mutually
exclusive. We propose a new semi-supervised fine-tuning method named Dual
Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain
adaptation. We first split the unlabeled training set in the target domain into
a pseudo-labeled confident subset and an unlabeled less-confident subset
according to the prediction confidence scores from the pre-trained source
model. Then we propose a soft-label moving-average updating strategy for the
unlabeled subset based on a moving-average prototypical classifier, which
gradually adapts the source model towards the target domain. Experiments show
that our proposed method achieves state-of-the-art performance and outperforms
previous methods by large margins.
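The abstract describes two mechanisms: splitting the unlabeled target training set by the source model's prediction confidence, and a moving-average prototypical classifier whose similarities supply gradually refreshed soft labels for the less-confident subset. The sketch below is not the authors' implementation; the threshold tau, the momenta alpha and beta, the temperature, and all function and class names are illustrative assumptions.

```python
# Minimal sketch of the two ideas described in the abstract, not the authors'
# code. tau, alpha, beta, temperature, and all names are illustrative assumptions.
import torch
import torch.nn.functional as F


def split_by_confidence(source_logits: torch.Tensor, tau: float = 0.9):
    """Split target training samples into a confident, pseudo-labeled subset and
    a less-confident, unlabeled subset using source-model softmax scores."""
    probs = F.softmax(source_logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    confident_mask = confidence >= tau
    return confident_mask, pseudo_labels


class MovingAveragePrototypeClassifier:
    """Keeps an exponential moving average of per-class feature prototypes and
    turns prototype similarities into soft labels."""

    def __init__(self, num_classes: int, feat_dim: int, alpha: float = 0.99):
        self.alpha = alpha
        self.prototypes = torch.zeros(num_classes, feat_dim)

    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # EMA update of each class prototype with the batch mean feature of that class.
        for c in labels.unique():
            class_mean = feats[labels == c].mean(dim=0)
            self.prototypes[c] = self.alpha * self.prototypes[c] + (1 - self.alpha) * class_mean

    def soft_labels(self, feats: torch.Tensor, temperature: float = 0.1):
        # Cosine similarity to each prototype, normalized into a soft-label distribution.
        sims = F.normalize(feats, dim=1) @ F.normalize(self.prototypes, dim=1).T
        return F.softmax(sims / temperature, dim=1)


def ema_soft_label_update(stored_q: torch.Tensor, new_p: torch.Tensor, beta: float = 0.9):
    """Moving-average refresh of the soft labels kept for the less-confident subset."""
    return beta * stored_q + (1 - beta) * new_p
```

The soft labels produced for the less-confident subset would then be used as fine-tuning targets alongside the hard pseudo-labels of the confident subset; the exact losses and schedules are not specified in the abstract.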
Related papers
- Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning [26.544837987747766]
We propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast learning.
The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain.
arXiv Detail & Related papers (2023-06-02T15:09:19Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to the source domain data to utilize its distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained on the source domain is available, rather than the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)