Prototypical Pseudo Label Denoising and Target Structure Learning for
Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2101.10979v2
- Date: Thu, 28 Jan 2021 15:10:37 GMT
- Title: Prototypical Pseudo Label Denoising and Target Structure Learning for
Domain Adaptive Semantic Segmentation
- Authors: Pan Zhang, Bo Zhang, Ting Zhang, Dong Chen, Yong Wang, Fang Wen
- Abstract summary: A competitive approach in domain adaptive segmentation trains the network with the pseudo labels on the target domain.
We go one step further and exploit the feature distances from prototypes, which provide richer information than the prototypes alone.
We find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance.
- Score: 24.573242887937834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-training is a competitive approach in domain adaptive segmentation,
which trains the network with pseudo labels on the target domain. However, the
pseudo labels are inevitably noisy and the target features are dispersed due
to the discrepancy between the source and target domains. In this paper, we
rely on representative prototypes, the feature centroids of classes, to address
the two issues for unsupervised domain adaptation. In particular, we take one
step further and exploit the feature distances from prototypes, which provide
richer information than the prototypes alone. Specifically, we use these
distances to estimate the likelihood of each pseudo label, facilitating online
correction in the course
of training. Meanwhile, we align the prototypical assignments based on relative
feature distances for two different views of the same target, producing a more
compact target feature space. Moreover, we find that distilling the already
learned knowledge to a self-supervised pretrained model further boosts the
performance. Our method shows a substantial performance advantage over
state-of-the-art methods. We will make the code publicly available.
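To make the prototypical denoising step concrete, here is a minimal PyTorch-style sketch, assuming hard pseudo labels and Euclidean feature-to-prototype distances; the function names, the weighting scheme, and the temperature tau are illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn.functional as F

    def compute_prototypes(features, pseudo_labels, num_classes):
        # Prototype = feature centroid of each class under the current pseudo labels.
        # features: (N, D), pseudo_labels: (N,) with values in [0, num_classes)
        protos = torch.zeros(num_classes, features.size(1), device=features.device)
        for c in range(num_classes):
            mask = pseudo_labels == c
            if mask.any():
                protos[c] = features[mask].mean(dim=0)
        return protos

    def pseudo_label_weights(features, prototypes, tau=1.0):
        # Soft likelihood of each class from feature-to-prototype distances:
        # a pixel whose feature lies far from its pseudo class's prototype
        # gets a small weight, downweighting its pseudo label online.
        dists = torch.cdist(features, prototypes)   # (N, C) Euclidean distances
        return F.softmax(-dists / tau, dim=1)       # closer prototype -> higher weight

During self-training, the per-pixel cross-entropy on the target domain would then be scaled by weights[n, pseudo_labels[n]], so that pixels whose features sit far from their assigned prototype contribute less to the loss.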
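The target structure learning term can be sketched in the same style: two augmented views of the same target pixels are softly assigned to prototypes by relative feature distance, and the two assignments are pulled together. The symmetric cross-entropy form below is an assumption for illustration, not necessarily the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def soft_assignment(features, prototypes, tau=0.5):
        # Soft prototypical assignment from relative feature-to-prototype distances.
        dists = torch.cdist(features, prototypes)   # (N, C)
        return F.softmax(-dists / tau, dim=1)

    def structure_loss(feats_view1, feats_view2, prototypes, tau=0.5):
        # Align the prototypical assignments of two views of the same target
        # pixels, which compacts the target feature space around the prototypes.
        p1 = soft_assignment(feats_view1, prototypes, tau)
        p2 = soft_assignment(feats_view2, prototypes, tau)
        eps = 1e-8  # numerical floor before taking logs
        loss_12 = -(p2.detach() * (p1 + eps).log()).sum(dim=1).mean()
        loss_21 = -(p1.detach() * (p2 + eps).log()).sum(dim=1).mean()
        return 0.5 * (loss_12 + loss_21)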
Related papers
- PiPa++: Towards Unification of Domain Adaptive Semantic Segmentation via Self-supervised Learning [34.786268652516355]
Unsupervised domain adaptive segmentation aims to improve the segmentation accuracy of models on target domains without relying on labeled data from those domains.
It seeks to align the feature representations of the source domain (where labeled data is available) and the target domain (where only unlabeled data is present).
arXiv Detail & Related papers (2024-07-24T08:53:29Z)
- Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z)
- DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation [78.30720731968135]
Unsupervised domain adaptation in semantic segmentation has been proposed to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet, which alleviates source-domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD), introducing an auxiliary classifier that learns more discriminative target-domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z)
- Contrastive Test-Time Adaptation [83.73506803142693]
We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning.
We produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space (a minimal sketch of this refinement step appears after this list).
Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks.
arXiv Detail & Related papers (2022-04-21T19:17:22Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method that processes low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features and thereby reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
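For the soft-voting refinement mentioned in the AdaContrast entry above, a minimal sketch might look as follows; the memory bank, the choice of k, and the function name are illustrative assumptions rather than that paper's released code.

    import torch

    def soft_voting_labels(query_feats, bank_feats, bank_probs, k=5):
        # Refine pseudo labels by averaging the stored softmax predictions of
        # each sample's k nearest neighbors in the target feature space.
        # query_feats: (N, D) and bank_feats: (M, D), both L2-normalized;
        # bank_probs: (M, C) softmax outputs stored alongside the bank features.
        sims = query_feats @ bank_feats.t()   # (N, M) cosine similarities
        _, idx = sims.topk(k, dim=1)          # k nearest neighbors per query
        voted = bank_probs[idx].mean(dim=1)   # (N, C) soft vote over neighbors
        return voted.argmax(dim=1)            # refined hard pseudo labels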
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.