Get away from Style: Category-Guided Domain Adaptation for Semantic Segmentation
- URL: http://arxiv.org/abs/2103.15467v1
- Date: Mon, 29 Mar 2021 10:00:50 GMT
- Title: Get away from Style: Category-Guided Domain Adaptation for Semantic Segmentation
- Authors: Yantian Luo, Zhiming Wang, Danlan Huang, Ning Ge and Jianhua Lu
- Abstract summary: Unsupervised domain adaptation (UDA) is becoming increasingly popular for tackling real-world problems where no ground truth is available for the target domain.
In this paper, we focus on UDA for the semantic segmentation task.
Firstly, we propose a style-independent content feature extraction mechanism that keeps the style information of extracted features in a similar space.
Secondly, to balance pseudo labels across categories, we propose a category-guided threshold mechanism that selects category-wise pseudo labels for self-supervised learning.
- Score: 15.002381934551359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) is becoming increasingly popular for
tackling real-world problems where no ground truth is available for the target domain.
Although it removes the need for a mass of tedious annotation work, UDA unavoidably
faces the problem of how to narrow the domain discrepancy to boost transfer
performance. In this paper, we focus on UDA for the semantic segmentation task.
Firstly, we propose a style-independent content feature extraction mechanism that
keeps the style information of extracted features in a similar space, since style
plays an extremely slight role in semantic segmentation compared with content.
Secondly, to balance pseudo labels across categories, we propose a category-guided
threshold mechanism that selects category-wise pseudo labels for self-supervised
learning. The experiments are conducted using GTA5 as the source domain and
Cityscapes as the target domain. The results show that our model outperforms the
state of the art with a noticeable gain on cross-domain adaptation tasks.
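The abstract names both mechanisms without implementation detail. The sketch below is one plausible PyTorch reading: instance normalization as a proxy for style-independent feature extraction, and a per-class confidence quantile as the category-guided threshold. Both choices and all function names are assumptions, not the authors' exact formulation.

```python
import torch

def style_normalize(feat, eps=1e-5):
    """Push features toward a shared 'style space' by removing per-channel
    statistics (instance normalization). A common proxy for discarding style
    while keeping content; the paper's exact mechanism may differ."""
    mu = feat.mean(dim=(2, 3), keepdim=True)           # (B, C, 1, 1)
    sigma = feat.std(dim=(2, 3), keepdim=True)
    return (feat - mu) / (sigma + eps)

def category_guided_pseudo_labels(probs, num_classes, keep_ratio=0.5,
                                  ignore_index=255):
    """Select pseudo-labels with a per-category threshold so rare classes
    are not drowned out by easy, frequent ones.

    probs: (B, C, H, W) softmax outputs on target-domain images."""
    conf, labels = probs.max(dim=1)                    # per-pixel confidence / class
    pseudo = torch.full_like(labels, ignore_index)
    for c in range(num_classes):
        mask = labels == c
        if not mask.any():
            continue
        # class-wise cutoff: keep only this class's most confident pixels
        thresh = torch.quantile(conf[mask], 1.0 - keep_ratio)
        pseudo[mask & (conf >= thresh)] = c
    return pseudo                                      # (B, H, W), ignore_index elsewhere
```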
Related papers
- MoDA: Leveraging Motion Priors from Videos for Advancing Unsupervised Domain Adaptation in Semantic Segmentation [61.4598392934287]
This study introduces a different UDA scenario in which the target domain contains unlabeled video frames.
We design a Motion-guided Domain Adaptive semantic segmentation framework (MoDA).
MoDA harnesses self-supervised object motion cues to facilitate cross-domain alignment for the segmentation task.
arXiv Detail & Related papers (2023-09-21T01:31:54Z)
- Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z)
- Continual Unsupervised Domain Adaptation for Semantic Segmentation using a Class-Specific Transfer [9.46677024179954]
Segmentation models do not generalize to unseen domains.
We propose a light-weight style transfer framework that incorporates two class-conditional AdaIN layers (a minimal sketch follows this entry).
We extensively validate our approach on a synthetic sequence and further propose a challenging sequence consisting of real domains.
arXiv Detail & Related papers (2022-08-12T21:30:49Z)
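AdaIN re-normalizes content features with style statistics; a class-conditional variant computes those statistics inside each class region. The paper's exact layer is not described in this snippet, so the sketch below is an assumed reading; masks, shapes, and names are hypothetical.

```python
import torch

def masked_stats(x, m, eps=1e-5):
    """Per-channel mean/std over the pixels selected by mask m."""
    n = m.sum(dim=(2, 3), keepdim=True).clamp(min=1)
    mu = (x * m).sum(dim=(2, 3), keepdim=True) / n
    std = (((x - mu) ** 2 * m).sum(dim=(2, 3), keepdim=True) / n).sqrt()
    return mu, std + eps

def class_conditional_adain(content, style, content_mask, style_mask,
                            num_classes):
    """Re-normalize content features class by class, so e.g. road pixels
    adopt road-style statistics only.

    content, style: (B, C, H, W) feature maps.
    content_mask, style_mask: (B, H, W) class maps at feature resolution."""
    out = content.clone()
    for c in range(num_classes):
        cm = (content_mask == c).unsqueeze(1)          # (B, 1, H, W)
        sm = (style_mask == c).unsqueeze(1)
        if not (cm.any() and sm.any()):
            continue
        mu_c, sig_c = masked_stats(content, cm.float())
        mu_s, sig_s = masked_stats(style, sm.float())
        # classic AdaIN update, restricted to class-c pixels
        out = torch.where(cm, (content - mu_c) / sig_c * sig_s + mu_s, out)
    return out
```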
- Semantic-Aware Domain Generalized Segmentation [67.49163582961877]
Deep models trained on a source domain lack generalization when evaluated on unseen target domains with different data distributions.
We propose a framework including two novel modules: Semantic-Aware Normalization (SAN) and Semantic-Aware Whitening (SAW).
Our approach shows significant improvements over the existing state of the art on various backbone networks.
arXiv Detail & Related papers (2022-04-02T09:09:59Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains (a minimal sketch follows this entry).
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
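One way to adapt category-wise centroids across domains is to pool per-class features in each domain and apply an InfoNCE-style loss that pulls same-class centroids together. The pairing and loss below are assumptions; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def class_centroids(feats, labels, num_classes):
    """Mean feature vector per class. feats: (B, C, H, W), labels: (B, H, W)."""
    B, C, H, W = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(-1, C)
    lab = labels.reshape(-1)
    cents = torch.zeros(num_classes, C, device=feats.device)
    for c in range(num_classes):
        m = lab == c
        if m.any():
            cents[c] = flat[m].mean(dim=0)
    return cents

def centroid_contrastive_loss(src_cents, tgt_cents, temperature=0.1):
    """Pull same-class centroids across domains together, push other
    classes apart (InfoNCE over the K x K similarity matrix). In
    practice, classes absent from the batch should be masked out."""
    src = F.normalize(src_cents, dim=1)
    tgt = F.normalize(tgt_cents, dim=1)
    logits = src @ tgt.t() / temperature
    targets = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, targets)
```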
- Unsupervised Domain Adaptation for Semantic Segmentation by Content Transfer [13.004192914150646]
We tackle unsupervised domain adaptation (UDA) for semantic segmentation.
The main problem of UDA for semantic segmentation is reducing the domain gap between real and synthetic images.
We propose a zero-style loss method to make the best of this effect.
arXiv Detail & Related papers (2020-12-23T09:01:00Z)
- Domain Adaptive Semantic Segmentation Using Weak Labels [115.16029641181669]
We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module to force the network to attend to certain categories (a minimal sketch follows this entry).
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
arXiv Detail & Related papers (2020-07-30T01:33:57Z)
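An image-level weak label is a multi-hot vector of the categories present. A common way to force attention to those categories is to pool dense logits into per-image class scores and apply a multi-label classification loss, as sketched below; the pooling choice and names are assumptions, not the paper's stated module.

```python
import torch
import torch.nn.functional as F

def weak_label_loss(seg_logits, image_labels):
    """Multi-label classification loss from image-level weak labels.

    seg_logits: (B, K, H, W) per-pixel class scores.
    image_labels: (B, K) multi-hot categories present in each image."""
    # smooth max pooling (log-sum-exp) over spatial positions turns
    # dense predictions into one score per image and class
    img_logits = torch.logsumexp(seg_logits.flatten(2), dim=2)
    return F.binary_cross_entropy_with_logits(img_logits,
                                              image_labels.float())
```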
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches show that performing semantic-level alignment helps tackle the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things (a minimal sketch follows this entry).
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
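The snippet mentions minimizing the most similar stuff and instance features between domains. A plausible form is a nearest-neighbor pull: match each target region feature to its most similar source feature and minimize their cosine distance, as sketched below; the pairing rule is an assumption.

```python
import torch
import torch.nn.functional as F

def nearest_feature_alignment(src_feats, tgt_feats):
    """Pull each target feature toward its most similar source feature.

    src_feats: (Ns, C) pooled stuff-region or thing-instance features
    from the source domain; tgt_feats: (Nt, C) target counterparts.
    Stuff would use category-pooled features, things per-instance ones,
    each aligned with this same loss."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    sim = tgt @ src.t()                    # (Nt, Ns) cosine similarities
    best = sim.max(dim=1).values           # most similar source feature
    return (1.0 - best).mean()             # cosine distance of matched pairs
```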
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.