Classes Matter: A Fine-grained Adversarial Approach to Cross-domain
Semantic Segmentation
- URL: http://arxiv.org/abs/2007.09222v1
- Date: Fri, 17 Jul 2020 20:50:59 GMT
- Authors: Haoran Wang, Tong Shen, Wei Zhang, Lingyu Duan, Tao Mei
- Abstract summary: We propose a fine-grained adversarial learning strategy for class-level feature alignment.
We adopt a fine-grained domain discriminator that not only distinguishes between domains, but also differentiates them at the class level.
An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment.
- Score: 95.10255219396109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite great progress in supervised semantic segmentation, a large
performance drop is usually observed when deploying the model in the wild.
Domain adaptation methods tackle the issue by aligning the source domain and
the target domain. However, most existing methods attempt to perform the
alignment from a holistic view, ignoring the underlying class-level data
structure in the target domain. To fully exploit the supervision in the source
domain, we propose a fine-grained adversarial learning strategy for class-level
feature alignment while preserving the internal structure of semantics across
domains. We adopt a fine-grained domain discriminator that not only
distinguishes between domains, but also differentiates them at the class level.
The
traditional binary domain labels are also generalized to domain encodings as
the supervision signal to guide the fine-grained feature alignment. An analysis
with Class Center Distance (CCD) validates that our fine-grained adversarial
strategy achieves better class-level alignment compared to other
state-of-the-art methods. Our method is easy to implement and its effectiveness
is evaluated on three classical domain adaptation tasks, i.e., GTA5 to
Cityscapes, SYNTHIA to Cityscapes and Cityscapes to Cross-City. Large
performance gains show that our method outperforms other global feature
alignment based and class-wise alignment based counterparts. The code is
publicly available at https://github.com/JDAI-CV/FADA.
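To make the abstract's two key ideas concrete, here is a minimal NumPy sketch: the binary domain label is generalized to a 2K-dimensional "domain encoding" built from per-pixel class probabilities (source pixels occupy the first K slots, target pixels the last K), the discriminator is trained with a cross-entropy against that soft encoding, and Class Center Distance is read as the distance between per-class feature centers of the two domains. All function names and the exact form of the loss and metric are illustrative assumptions, not the authors' code; the official implementation is at the GitHub link above.

```python
import numpy as np

def domain_encoding(class_probs, is_source):
    """Expand per-pixel class probabilities (K classes) into a 2K-dim
    domain encoding: source pixels fill the first K slots, target
    pixels the last K (an assumption about the paper's encoding)."""
    k = class_probs.shape[-1]
    enc = np.zeros(class_probs.shape[:-1] + (2 * k,))
    if is_source:
        enc[..., :k] = class_probs
    else:
        enc[..., k:] = class_probs
    return enc

def fine_grained_adv_loss(disc_logits, encoding):
    """Cross-entropy between the discriminator's 2K-way log-softmax
    and the soft domain encoding, averaged over pixels."""
    z = disc_logits - disc_logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float(-(encoding * log_p).sum(axis=-1).mean())

def class_center_distance(feat_s, lab_s, feat_t, lab_t, num_classes):
    """Mean Euclidean distance between per-class feature centers of
    source and target domains (a simplified reading of CCD)."""
    dists = []
    for c in range(num_classes):
        ms, mt = lab_s == c, lab_t == c
        if ms.any() and mt.any():  # skip classes absent in either domain
            dists.append(np.linalg.norm(feat_s[ms].mean(0) - feat_t[mt].mean(0)))
    return float(np.mean(dists))
```

Under this reading, perfectly class-aligned features give a CCD of zero, and lower CCD indicates tighter class-level alignment, which is how the abstract's analysis compares methods.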
Related papers
- Prototypical Contrast Adaptation for Domain Adaptive Semantic
Segmentation [52.63046674453461]
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts the class-centered distribution alignment for adaptation.
arXiv Detail & Related papers (2022-07-14T04:54:26Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- More Separable and Easier to Segment: A Cluster Alignment Method for
Cross-Domain Semantic Segmentation [41.81843755299211]
We propose a new UDA semantic segmentation approach based on the domain closeness assumption to alleviate the above problems.
Specifically, a prototype clustering strategy is applied to cluster pixels with the same semantic, which will better maintain associations among target domain pixels.
Experiments conducted on GTA5 and SYNTHIA demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-05-07T10:24:18Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot
Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining
and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach achieves state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)