Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and
Curriculum Learning
- URL: http://arxiv.org/abs/2112.01948v1
- Date: Fri, 3 Dec 2021 14:47:32 GMT
- Title: Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and
Curriculum Learning
- Authors: Shengjia Zhang, Tiancheng Lin, Yi Xu
- Abstract summary: Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy.
In the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
- Score: 19.903568227077763
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: By leveraging data from a fully labeled source domain, unsupervised domain
adaptation (UDA) improves classification performance on an unlabeled target domain
through explicit discrepancy minimization of data distributions or adversarial learning.
As an enhancement, category alignment is often incorporated during adaptation to
reinforce target feature discrimination by utilizing model predictions. However, two
problems remain largely unexplored: pseudo-label inaccuracy incurred by wrong category
predictions on the target domain, and distribution deviation caused by overfitting on
the source domain. In this paper, we propose a model-agnostic two-stage learning
framework, which greatly reduces flawed model predictions using a soft pseudo-label
strategy and avoids overfitting on the source domain with a curriculum learning
strategy. Theoretically, it successfully decreases the combined risk in the upper bound
of the expected error on the target domain. In the first stage, we train a model with a
distribution alignment-based UDA method to obtain soft semantic labels on the target
domain with rather high confidence. To avoid overfitting on the source domain, in the
second stage we propose a curriculum learning strategy that adaptively controls the
weighting between losses from the two domains, so that the focus of training gradually
shifts from the source distribution to the target distribution while prediction
confidence is boosted on the target domain. Extensive experiments on two well-known
benchmark datasets validate the universal effectiveness of the proposed framework in
promoting the performance of top-ranked UDA algorithms and demonstrate its consistently
superior performance.
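The "combined risk" invoked above is the λ term of the classic target-risk bound of Ben-David et al. (2010), which this line of work builds on; stating the bound makes the theoretical claim concrete:

```latex
% Target-risk bound (Ben-David et al., 2010); \lambda is the combined risk.
\epsilon_T(h) \;\le\; \epsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big]
```

Here ε_S and ε_T are the source and target risks and the middle term measures the divergence between the two domain distributions; more reliable pseudo-labels shrink the target-risk contribution inside λ.

The following is a minimal sketch of the two-stage recipe as the abstract describes it: stage one caches temperature-softened predictions of an alignment-trained UDA model as soft pseudo-labels, and stage two ramps a curriculum weight from the source cross-entropy toward the target soft-label loss. The function names, the temperature, and the linear schedule are illustrative assumptions, not the paper's exact design:

```python
# Hedged sketch of the two-stage framework; the paper's exact losses and
# schedule may differ.
import torch
import torch.nn.functional as F

def soft_pseudo_labels(model, target_loader, T=2.0):
    """Stage 1: run the alignment-trained UDA model over the target set and
    cache temperature-softened class probabilities as soft pseudo-labels."""
    model.eval()
    labels = []
    with torch.no_grad():
        for x, _ in target_loader:
            labels.append(F.softmax(model(x) / T, dim=1))
    return torch.cat(labels)

def curriculum_weight(step, total_steps):
    """Monotone schedule in [0, 1]: 0 = pure source loss, 1 = pure target
    loss. A linear ramp is only one possible choice."""
    return min(1.0, step / total_steps)

def stage2_step(model, src_batch, tgt_batch, tgt_soft, step, total_steps, opt):
    """Stage 2: curriculum-weighted mixture of the labeled source loss and
    the soft pseudo-label loss on the target domain."""
    xs, ys = src_batch
    xt, idx = tgt_batch                      # idx indexes the cached soft labels
    loss_src = F.cross_entropy(model(xs), ys)
    log_p = F.log_softmax(model(xt), dim=1)  # soft-label cross-entropy
    loss_tgt = -(tgt_soft[idx] * log_p).sum(dim=1).mean()
    lam = curriculum_weight(step, total_steps)
    loss = (1.0 - lam) * loss_src + lam * loss_tgt
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```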
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
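Since this entry hinges on a differentiable expected calibration error (ECE), here is a minimal sketch of one standard way to obtain such a surrogate: replace hard confidence bins with soft assignments so that gradients flow through the confidence scores. The bin count and softness temperature are assumptions, not Cal-SFDA's published choices:

```python
# Hedged sketch of a soft-binned, differentiable ECE surrogate.
import torch

def soft_ece(logits, labels, n_bins=10, temp=100.0):
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    acc = (pred == labels).float()           # accuracy itself stays hard
    centers = (torch.arange(n_bins, dtype=conf.dtype) + 0.5) / n_bins
    # Soft assignment of each sample to each confidence bin.
    w = torch.softmax(-temp * (conf.unsqueeze(1) - centers) ** 2, dim=1)
    mass = w.sum(dim=0).clamp_min(1e-8)      # per-bin sample mass
    bin_conf = (w * conf.unsqueeze(1)).sum(dim=0) / mass
    bin_acc = (w * acc.unsqueeze(1)).sum(dim=0) / mass
    # Weighted |confidence - accuracy| gap, as in the usual ECE definition.
    return ((mass / conf.numel()) * (bin_conf - bin_acc).abs()).sum()
```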
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain Adaptation [45.024029784248825]
Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain.
For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain.
We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation.
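The summary does not pin down what the two moving averages track, so the sketch below only shows the ingredient such methods share: an exponential moving average over per-sample predictions that keeps pseudo-labels from flipping noisily between epochs. The class name, momentum, and threshold are hypothetical:

```python
# Hedged sketch of moving-average pseudo-labeling; DMAPL's "dual" averages
# may track a different pair of statistics.
import torch
import torch.nn.functional as F

class EmaPseudoLabels:
    """Exponential moving average of per-sample class probabilities."""
    def __init__(self, n_samples, n_classes, momentum=0.9):
        self.bank = torch.full((n_samples, n_classes), 1.0 / n_classes)
        self.m = momentum

    @torch.no_grad()
    def update(self, idx, logits):
        p = F.softmax(logits, dim=1).cpu()
        self.bank[idx] = self.m * self.bank[idx] + (1.0 - self.m) * p

    def hard_labels(self, idx, threshold=0.8):
        conf, y = self.bank[idx].max(dim=1)
        return y, conf >= threshold          # labels plus a confidence mask
```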
arXiv Detail & Related papers (2022-12-15T23:20:13Z)
- Source-free Unsupervised Domain Adaptation for Blind Image Quality Assessment [20.28784839680503]
Existing learning-based methods for blind image quality assessment (BIQA) are heavily dependent on large amounts of annotated training data.
In this paper, we take the first step towards the source-free unsupervised domain adaptation (SFUDA) in a simple yet efficient manner.
We present a group of well-designed self-supervised objectives to guide the adaptation of the BN affine parameters towards the target domain.
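Because the entry is specific about adapting only the BatchNorm (BN) affine parameters, a minimal sketch of that mechanic follows; the entropy objective is a placeholder in the spirit of test-time adaptation methods such as TENT, not the paper's own self-supervised objectives:

```python
# Hedged sketch: train only BN affine parameters (gamma, beta) on unlabeled
# target data, keeping all other weights frozen.
import torch
import torch.nn as nn

def collect_bn_affine_params(model: nn.Module):
    model.requires_grad_(False)              # freeze everything first
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)) and m.affine:
            m.requires_grad_(True)           # unfreeze gamma and beta
            m.train()                        # use target batch statistics
            params += [m.weight, m.bias]
    return params

def adapt_step(model, x_target, optimizer):
    probs = torch.softmax(model(x_target), dim=1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()  # entropy
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

A driver would pass only these parameters to the optimizer, e.g. `torch.optim.SGD(collect_bn_affine_params(model), lr=1e-3)`.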
arXiv Detail & Related papers (2022-07-17T09:42:36Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) could be severely hampered due to mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective in UDA by exploiting the self-supervisions of unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
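As a rough illustration of plugging discriminator uncertainty into adversarial DA, the sketch below down-weights the domain-classification loss of samples on which the discriminator is uncertain; how UTEP actually models uncertainty, and in which direction it re-weights, is not stated in the summary, so treat this purely as the plug-in shape of the idea:

```python
# Hedged sketch: uncertainty-weighted domain-discriminator loss.
import math
import torch
import torch.nn.functional as F

def weighted_domain_loss(disc_logits, domain_labels):
    """disc_logits: (N, 1) raw discriminator outputs.
    domain_labels: (N, 1) floats, 1 for source and 0 for target."""
    p = torch.sigmoid(disc_logits)
    # Binary entropy, normalized to [0, 1]; high = uncertain sample.
    ent = -(p * p.clamp_min(1e-8).log()
            + (1 - p) * (1 - p).clamp_min(1e-8).log()) / math.log(2.0)
    weight = (1.0 - ent).detach()            # trust confident samples more
    loss = F.binary_cross_entropy_with_logits(
        disc_logits, domain_labels, reduction="none")
    return (weight * loss).mean()
```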
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
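A minimal sketch of the distribution-estimation idea: with no source data available, fit one Gaussian per class to pseudo-labeled target features and draw surrogate source-like features from it for alignment. The estimator below is deliberately simple and is not SFDA-DE's exact procedure:

```python
# Hedged sketch: per-class Gaussian estimation and surrogate feature sampling.
import torch

def estimate_class_gaussians(features, pseudo_labels, n_classes, eps=1e-4):
    d = features.size(1)
    gaussians = []
    for c in range(n_classes):
        fc = features[pseudo_labels == c]
        cov = torch.cov(fc.T) + eps * torch.eye(d)   # regularize for stability
        gaussians.append(
            torch.distributions.MultivariateNormal(fc.mean(dim=0), cov))
    return gaussians

def sample_surrogate_features(gaussians, n_per_class):
    feats, labels = [], []
    for c, g in enumerate(gaussians):
        feats.append(g.sample((n_per_class,)))
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```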
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a trained classification model is available, rather than access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)