Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2102.11614v1
- Date: Tue, 23 Feb 2021 10:51:45 GMT
- Title: Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain
Adaptation
- Authors: Weijie Chen and Luojun Lin and Shicai Yang and Di Xie and Shiliang Pu
and Yueting Zhuang and Wenqi Ren
- Abstract summary: We propose a novel Self-Supervised Noisy Label Learning method.
Our method can easily achieve state-of-the-art results and surpass other methods by a very large margin.
- Score: 87.60688582088194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many existing unsupervised domain adaptation approaches rest on the
strong prerequisite of free access to source data. However, source data is
unavailable in many practical scenarios due to the constraints of expensive data
transmission and data privacy protection. Usually, the given source-domain
pre-trained model is expected to be optimized with only unlabeled target data, a
setting termed source-free unsupervised domain adaptation. In this paper, we
solve this problem from the perspective of noisy label learning, since the given
pre-trained model can pre-generate noisy labels for the unlabeled target data
via direct network inference. Under this formulation, and incorporating
self-supervised learning, we propose a novel Self-Supervised Noisy Label
Learning method, which can effectively fine-tune the pre-trained model with the
pre-generated labels as well as self-generated labels on the fly. Extensive
experiments have been conducted to validate its effectiveness. Our method easily
achieves state-of-the-art results and surpasses other methods by a very large
margin. Code will be released.
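The first step the abstract describes, pre-generating noisy labels for unlabeled target data via direct network inference with the frozen source model, can be sketched as follows. This is a minimal illustration in plain Python, not the authors' implementation: `toy_model`, the target values, and the `softmax` helper are hypothetical stand-ins for a real classifier and dataset.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pre_generate_noisy_labels(model, target_data):
    """Run direct network inference with the frozen source model to
    assign a (noisy) pseudo-label to each unlabeled target sample."""
    labels = []
    for x in target_data:
        probs = softmax(model(x))
        # The argmax prediction becomes the sample's noisy label.
        labels.append(max(range(len(probs)), key=probs.__getitem__))
    return labels

# Toy stand-in for a source-pretrained 3-class classifier.
toy_model = lambda x: [x * 0.1, 1.0 - x * 0.1, 0.2]
targets = [0.0, 9.0]
print(pre_generate_noisy_labels(toy_model, targets))
```

The resulting labels are noisy precisely because the source model has never seen the target distribution; the paper's contribution is how to fine-tune robustly against that noise.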
Related papers
- Learning in the Wild: Towards Leveraging Unlabeled Data for Effectively Tuning Pre-trained Code Models [38.7352992942213]
We propose a novel approach named HINT to improve pre-trained code models with large-scale unlabeled datasets.
HINT includes two main modules: HybrId pseudo-labeled data selection and Noise-tolerant Training.
The experimental results show that HINT can better leverage those unlabeled data in a task-specific way.
arXiv Detail & Related papers (2024-01-02T06:39:00Z)
- Overcoming Label Noise for Source-free Unsupervised Video Domain Adaptation [39.71690595469969]
We present a self-training based source-free video domain adaptation approach.
We use the source pre-trained model to generate pseudo-labels for the target domain samples.
We further enhance the adaptation performance by implementing a teacher-student framework.
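The teacher-student framework mentioned above is commonly realized with an exponential-moving-average (EMA) teacher whose weights trail the student's. A minimal sketch follows; the parameter vectors and the momentum value are illustrative assumptions, not the paper's exact settings.

```python
def ema_update(teacher_weights, student_weights, momentum=0.99):
    """Update teacher parameters as an exponential moving average of
    the student's, so the teacher yields smoother pseudo-labels."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_weights, student_weights)]

# Toy 2-parameter "networks": the teacher drifts slowly toward the student.
teacher = [1.0, 0.0]
student = [0.0, 1.0]
teacher = ema_update(teacher, student)
print(teacher)
```

Because the teacher averages over many student states, its predictions change slowly, which stabilizes the pseudo-labels used to supervise the student.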
arXiv Detail & Related papers (2023-11-30T14:06:27Z)
- Uncertainty-aware Mean Teacher for Source-free Unsupervised Domain Adaptive 3D Object Detection [6.345037597566315]
Pseudo-label-based self-training approaches are a popular method for source-free unsupervised domain adaptation.
We propose an uncertainty-aware mean teacher framework which implicitly filters incorrect pseudo-labels during training.
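A common way to filter incorrect pseudo-labels during training, as described above, is to keep only predictions whose confidence clears a threshold. The sketch below uses maximum class probability as the uncertainty proxy; the threshold and toy predictions are illustrative assumptions, not the paper's specific uncertainty estimate.

```python
def filter_by_confidence(probs_list, threshold=0.8):
    """Keep (sample_index, label) pairs only for samples whose maximum
    class probability reaches the confidence threshold; the rest are
    treated as likely-incorrect pseudo-labels and discarded."""
    kept = []
    for i, probs in enumerate(probs_list):
        conf = max(probs)
        if conf >= threshold:
            kept.append((i, probs.index(conf)))
    return kept

# Three samples over two classes; the ambiguous middle one is dropped.
preds = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]
print(filter_by_confidence(preds))
```

Filtering trades coverage for label quality: a higher threshold keeps fewer but cleaner pseudo-labels for the self-training loop.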
arXiv Detail & Related papers (2021-09-29T18:17:09Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize its distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained on the source domain is available, instead of access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- A Free Lunch for Unsupervised Domain Adaptive Object Detection without Source Data [69.091485888121]
Unsupervised domain adaptation assumes that source and target domain data are freely available and usually trained together to reduce the domain gap.
We propose a source data-free domain adaptive object detection (SFOD) framework via modeling it into a problem of learning with noisy labels.
arXiv Detail & Related papers (2020-12-10T01:42:35Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.