Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2003.03773v3
- Date: Thu, 15 Oct 2020 06:07:14 GMT
- Title: Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation
- Authors: Zhedong Zheng and Yi Yang
- Abstract summary: This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
- Score: 49.295165476818866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on the unsupervised domain adaptation of transferring the
knowledge from the source domain to the target domain in the context of
semantic segmentation. Existing approaches usually regard the pseudo label as
the ground truth to fully exploit the unlabeled target-domain data. Yet the
pseudo labels of the target-domain data are usually predicted by the model
trained on the source domain. Thus, the generated labels inevitably contain
incorrect predictions due to the discrepancy between the training domain and
the test domain; these errors can propagate to the final adapted model and
largely compromise the training process. To overcome this problem, this paper proposes
to explicitly estimate the prediction uncertainty during training to rectify
the pseudo label learning for unsupervised semantic segmentation adaptation.
Given the input image, the model outputs the semantic segmentation prediction
as well as the uncertainty of the prediction. Specifically, we model the
uncertainty via the prediction variance and involve the uncertainty into the
optimization objective. To verify the effectiveness of the proposed method, we
evaluate the proposed method on two prevalent synthetic-to-real semantic
segmentation benchmarks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, as
well as one cross-city benchmark, i.e., Cityscapes -> Oxford RobotCar. We
demonstrate through extensive experiments that the proposed approach (1)
dynamically sets different confidence thresholds according to the prediction
variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves
significant improvements over the conventional pseudo label learning and yields
competitive performance on all three benchmarks.
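The variance-based rectification described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-classifier setup, the KL-divergence variance proxy, and the exact weighting `exp(-kl) * ce + kl` are assumptions for the sketch.

```python
import numpy as np

def rectified_pseudo_label_loss(p_main, p_aux, pseudo_label, eps=1e-8):
    """Variance-rectified pseudo-label loss (sketch).

    p_main, p_aux: (H, W, C) softmax outputs of a primary and an auxiliary
    classifier head; pseudo_label: (H, W) integer class map predicted by
    the source-trained model.
    """
    # Prediction variance approximated by the per-pixel KL divergence
    # between the two classifier heads (a common proxy; the paper's exact
    # form may differ).
    kl = np.sum(p_main * (np.log(p_main + eps) - np.log(p_aux + eps)), axis=-1)
    # Standard per-pixel cross-entropy against the (noisy) pseudo label.
    h, w, _ = p_main.shape
    ce = -np.log(p_main[np.arange(h)[:, None], np.arange(w)[None, :],
                        pseudo_label] + eps)
    # Rectification: low-variance (confident) pixels keep full weight,
    # high-variance pixels are down-weighted; the additive kl term keeps
    # the model from inflating variance everywhere to escape the loss.
    return np.mean(np.exp(-kl) * ce + kl)
```

In effect, the variance acts as a per-pixel confidence threshold that is set dynamically rather than tuned as a global hyperparameter.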
Related papers
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Domain Adaptive Object Detection via Balancing Between Self-Training and Adversarial Learning [19.81071116581342]
Deep learning based object detectors struggle to generalize to a new target domain with significant variations in objects and background.
Current methods align domains by using image or instance-level adversarial feature alignment.
We propose to leverage the model's predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
arXiv Detail & Related papers (2023-11-08T16:40:53Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e., Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
We probabilistically pseudo-label target samples to generalize the source-trained model to the target domain at test time.
We introduce variational neighbor labels that incorporate the information of neighboring target samples to generate more robust pseudo labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z)
- Dirichlet-based Uncertainty Calibration for Active Domain Adaptation [33.33529827699169]
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate.
Traditional active learning methods may be less effective since they do not consider the domain shift issue.
We propose a Dirichlet-based Uncertainty Calibration (DUC) approach for active DA, which simultaneously achieves the mitigation of miscalibration and the selection of informative target samples.
arXiv Detail & Related papers (2023-02-27T14:33:29Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain Adaptation [45.024029784248825]
Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain.
For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain.
We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation.
arXiv Detail & Related papers (2022-12-15T23:20:13Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective for unsupervised domain adaptation (UDA) by exploiting the self-supervision of unlabeled data.
This work presents a systematic UDA framework that fully utilizes the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
In the first stage, we propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
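The adaptive weighting between source- and target-domain losses mentioned in the last entry can be illustrated with a simple ramp schedule. The sigmoid-style ramp and the function names below are assumptions for illustration, not the paper's actual curriculum.

```python
import math

def curriculum_weight(step, total_steps, max_weight=1.0):
    """Monotone ramp for the target-domain loss weight (illustrative;
    the paper's exact schedule may differ)."""
    progress = min(step / max(total_steps, 1), 1.0)
    # Sigmoid-style ramp commonly used in UDA training schedules:
    # starts at 0, saturates near max_weight as training progresses.
    return max_weight * (2.0 / (1.0 + math.exp(-10.0 * progress)) - 1.0)

def combined_loss(loss_src, loss_tgt, step, total_steps):
    """Weight the (noisier) target-domain loss by the curriculum ramp."""
    w = curriculum_weight(step, total_steps)
    return loss_src + w * loss_tgt
```

Early in training the model relies mostly on the clean source labels; the target-domain pseudo-label loss is phased in only as predictions become more reliable.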
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.