Unsupervised Domain Adaptation with Implicit Pseudo Supervision for
Semantic Segmentation
- URL: http://arxiv.org/abs/2204.06747v1
- Date: Thu, 14 Apr 2022 04:06:22 GMT
- Title: Unsupervised Domain Adaptation with Implicit Pseudo Supervision for
Semantic Segmentation
- Authors: Wanyu Xu, Zengmao Wang, Wei Bian
- Abstract summary: We train the model with pseudo labels that it implicitly produces itself, so that it learns new complementary knowledge about the target domain.
Experiments on the GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes tasks show that the proposed method achieves considerable improvements.
- Score: 7.748333539159297
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Pseudo-labelling is a popular technique in unsupervised domain adaptation
for semantic segmentation. However, pseudo labels are noisy and inevitably carry
confirmation bias, owing to the discrepancy between the source and target domains
and to the training process itself. In this paper, we train the model with pseudo
labels that it implicitly produces itself, so that it learns new complementary
knowledge about the target domain. Specifically, we propose a tri-learning
architecture in which every pair of branches produces pseudo labels to train the
third branch, and we align the pseudo labels of each pair of branches based on the
similarity of their probability distributions. To further exploit the pseudo labels
implicitly, we apply a triplet loss that maximizes the feature distances between
different classes and minimizes the distances within the same class. Extensive
experiments on the GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes tasks show that the
proposed method achieves considerable improvements.
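The two mechanisms in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the confidence threshold, the Euclidean distance metric, and the margin value are all assumptions made for the sketch.

```python
import numpy as np

def cross_branch_pseudo_labels(p1, p2, threshold=0.9):
    """Tri-learning idea: where two branches agree with high confidence,
    emit a pseudo label to supervise the third branch; -1 marks ignored pixels.
    p1, p2: arrays of shape (num_pixels, num_classes) with class probabilities."""
    y1, y2 = p1.argmax(-1), p2.argmax(-1)
    conf = np.minimum(p1.max(-1), p2.max(-1))
    mask = (y1 == y2) & (conf >= threshold)
    return np.where(mask, y1, -1)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet-loss idea: pull same-class features together (anchor/positive)
    and push different-class features apart (anchor/negative) by a margin."""
    d_pos = np.linalg.norm(anchor - positive)  # same-class distance
    d_neg = np.linalg.norm(anchor - negative)  # different-class distance
    return max(0.0, d_pos - d_neg + margin)
```

For example, if two branches both predict class 0 with probability above the threshold at a pixel, that pixel receives pseudo label 0; where they disagree or are unconfident, the pixel is ignored during training of the third branch.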
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches could suffer heavily from the noises and variations in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z) - Semantic Connectivity-Driven Pseudo-labeling for Cross-domain
Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z) - Learning Triangular Distribution in Visual World [5.796362696313493]
Convolutional neural networks are successful in pervasive vision tasks, including label distribution learning.
We study the mathematical connection between feature and its label, presenting a general and simple framework for label distribution learning.
We propose a so-called Triangular Distribution Transform (TDT) to build an injective function between feature and label, guaranteeing that any symmetric feature discrepancy linearly reflects the difference between labels.
arXiv Detail & Related papers (2023-11-30T15:02:13Z) - All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z) - SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic
Segmentation [52.62441404064957]
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the model trained on a labeled source domain.
Many methods tend to alleviate noisy pseudo labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts.
We propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels.
arXiv Detail & Related papers (2022-04-19T11:16:29Z) - Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic
Segmentation [31.50802009879241]
Domain adaptive semantic segmentation aims to learn a model with the supervision of source domain data, and to produce dense predictions on the unlabeled target domain.
One popular solution to this challenging task is self-training, which selects high-scoring predictions on target samples as pseudo labels for training.
We propose to directly explore the intrinsic pixel distributions of target domain data, instead of heavily relying on the source domain.
arXiv Detail & Related papers (2022-03-18T04:56:20Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - One Thing One Click: A Self-Training Approach for Weakly Supervised 3D
Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.