Exploiting Negative Learning for Implicit Pseudo Label Rectification in
Source-Free Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2106.12123v1
- Date: Wed, 23 Jun 2021 02:20:31 GMT
- Title: Exploiting Negative Learning for Implicit Pseudo Label Rectification in
Source-Free Domain Adaptive Semantic Segmentation
- Authors: Xin Luo, Wei Chen, Yusong Tan, Chen Li, Yulin He, Xiaogang Jia
- Abstract summary: State-of-the-art methods for source free domain adaptation (SFDA) are subject to strict limits.
\textit{PR-SFDA} achieves a performance of 49.0 mIoU, which is very close to that of the state-of-the-art counterparts.
- Score: 12.716865774780704
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is desirable to transfer the knowledge stored in a well-trained source
model onto a non-annotated target domain in the absence of source data. However,
state-of-the-art methods for source free domain adaptation (SFDA) are subject
to strict limits: 1) access to the internal specifications of source models is a
must; and 2) pseudo labels must remain clean during self-training, which makes
critical tasks that rely on semantic segmentation unreliable. Aiming at these
pitfalls, this study develops a domain adaptive solution to semantic
segmentation with pseudo label rectification (namely \textit{PR-SFDA}), which
operates in two phases: 1) \textit{Confidence-regularized unsupervised
learning}: a maximum squares loss is applied to regularize the target model and
ensure confident predictions; and 2) \textit{Noise-aware pseudo label
learning}: negative learning enables tolerance to noisy pseudo labels during
training, while positive learning achieves fast convergence. Extensive
experiments have been performed on the domain adaptive semantic segmentation
benchmark \textit{GTA5 $\to$ Cityscapes}. Overall, \textit{PR-SFDA} achieves a
performance of 49.0 mIoU, which is very close to that of the state-of-the-art
counterparts. Note that the latter demand access to the source model's
internal specifications, whereas, in sharp contrast, the \textit{PR-SFDA}
solution needs none.
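For concreteness, the sketch below shows PyTorch-style versions of the loss terms named in the abstract: a maximum squares confidence regularizer for the first phase and negative/positive learning on pseudo labels for the second. The tensor shapes, the random complementary-label sampling, and the function names are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def maximum_squares_loss(logits: torch.Tensor) -> torch.Tensor:
    # Confidence regularization: maximize the mean squared softmax probability,
    # which pushes per-pixel predictions away from the uniform distribution.
    # logits: (B, C, H, W) raw segmentation scores.
    prob = torch.softmax(logits, dim=1)
    return -(prob ** 2).sum(dim=1).mean() / 2

def negative_learning_loss(logits: torch.Tensor,
                           pseudo_labels: torch.Tensor) -> torch.Tensor:
    # Noise-tolerant term: for each pixel, sample one complementary class
    # (any class other than the pseudo label) and penalize confidence in it.
    # pseudo_labels: (B, H, W) integer pseudo labels.
    num_classes = logits.shape[1]
    prob = torch.softmax(logits, dim=1)
    comp = torch.randint(0, num_classes, pseudo_labels.shape,
                         device=logits.device)
    comp = torch.where(comp == pseudo_labels, (comp + 1) % num_classes, comp)
    p_comp = prob.gather(1, comp.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_comp).clamp_min(1e-7)).mean()

def positive_learning_loss(logits: torch.Tensor,
                           pseudo_labels: torch.Tensor) -> torch.Tensor:
    # Standard cross-entropy on the (possibly noisy) pseudo labels,
    # used alongside the negative term for fast convergence.
    return F.cross_entropy(logits, pseudo_labels)
```

In a typical self-training loop, the maximum squares term would drive the first phase, while a weighted sum of the negative and positive terms would drive the second; the weighting is left open here.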
Related papers
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
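As a rough illustration of the memory bank-based MMD alignment mentioned above, the following sketch computes an RBF-kernel MMD between features drawn from a memory bank of source-like samples and a batch of target-specific features; the bank handling, bandwidth, and names are assumptions for illustration, not the DaC implementation.

```python
import torch

def mmd_rbf(bank_feats: torch.Tensor, target_feats: torch.Tensor,
            bandwidth: float = 1.0) -> torch.Tensor:
    # bank_feats: (M, D) features kept in a memory bank for source-like samples.
    # target_feats: (N, D) features of the current batch of target-specific samples.
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    k_ss = rbf(bank_feats, bank_feats).mean()
    k_tt = rbf(target_feats, target_feats).mean()
    k_st = rbf(bank_feats, target_feats).mean()
    # Squared MMD estimate: small when the two feature distributions match.
    return k_ss + k_tt - 2 * k_st
```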
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in the sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
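The neighborhood-consistency regularizer mentioned here could look roughly like the following: each target sample's prediction is pulled toward the average prediction of its nearest neighbours in feature space. The k-NN retrieval and the KL form are assumptions for illustration, not the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency(feats: torch.Tensor, probs: torch.Tensor,
                             k: int = 4) -> torch.Tensor:
    # feats: (N, D) target features; probs: (N, C) softmax predictions.
    dist = torch.cdist(feats, feats)
    dist.fill_diagonal_(float('inf'))              # exclude each sample itself
    nn_idx = dist.topk(k, largest=False).indices   # (N, k) nearest neighbours
    nn_probs = probs[nn_idx].mean(dim=1)           # mean neighbour prediction
    # KL divergence pulls each prediction toward its local-neighbourhood average.
    return F.kl_div(probs.clamp_min(1e-7).log(), nn_probs,
                    reduction='batchmean')
```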
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for
Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume source and target domain data to be simultaneously available during training.
A pre-trained source model is always considered to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained on the source domain is available, instead of access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
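The uncertainty-rectified pseudo-label learning described here can be sketched as an uncertainty-weighted cross-entropy; in the sketch below, per-pixel prediction entropy stands in for the uncertainty estimate and down-weights noisy pseudo labels. The entropy proxy is an assumption: the cited paper derives its uncertainty differently (e.g., from the variance between classifier predictions).

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ce(logits: torch.Tensor,
                            pseudo_labels: torch.Tensor) -> torch.Tensor:
    # logits: (B, C, H, W); pseudo_labels: (B, H, W) integer pseudo labels.
    prob = torch.softmax(logits, dim=1)
    entropy = -(prob * prob.clamp_min(1e-7).log()).sum(dim=1)      # (B, H, W)
    weight = torch.exp(-entropy)            # down-weight uncertain pixels
    ce = F.cross_entropy(logits, pseudo_labels, reduction='none')  # (B, H, W)
    return (weight.detach() * ce).mean()
```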
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and investigates how such a model can be effectively utilized without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)