Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation
- URL: http://arxiv.org/abs/2004.00794v2
- Date: Tue, 9 Jun 2020 22:38:27 GMT
- Title: Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation
- Authors: Zhonghao Wang, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-Mei Hwu,
Thomas S. Huang, Humphrey Shi
- Abstract summary: A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
- Score: 97.8552697905657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning segmentation from synthetic data and adapting to real data can
significantly relieve human efforts in labelling pixel-level masks. A key
challenge of this task is how to alleviate the data distribution discrepancy
between the source and target domains, i.e. reducing domain shift. The common
approach to this problem is to minimize the discrepancy between feature
distributions from different domains through adversarial training. However,
directly aligning the feature distribution globally cannot guarantee
consistency from a local view (i.e. semantic-level), which prevents certain
semantic knowledge learned on the source domain from being applied to the
target domain. To tackle this issue, we propose a semi-supervised approach
named Alleviating Semantic-level Shift (ASS), which can successfully promote
the distribution consistency from both global and local views. Specifically,
leveraging a small number of labeled data from the target domain, we directly
extract semantic-level feature representations from both the source and the
target domains by averaging the features corresponding to the same categories
advised by pixel-level masks. We then feed the produced features to the
discriminator to conduct semantic-level adversarial learning, which
collaborates with the adversarial learning from the global view to better
alleviate the domain shift. We apply our ASS to two domain adaptation tasks,
from GTA5 to Cityscapes and from Synthia to Cityscapes. Extensive experiments
demonstrate that: (1) ASS can significantly outperform the current
state-of-the-art unsupervised methods by employing a small number of annotated samples from the
target domain; (2) ASS can beat the oracle model trained on the whole target
dataset by over 3 points by augmenting the synthetic source data with annotated
samples from the target domain without suffering from the prevalent problem of
overfitting to the source domain.
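The semantic-level feature extraction described in the abstract, averaging the features of pixels that share a category as advised by the pixel-level masks, can be sketched as follows. This is a minimal NumPy illustration; the function and variable names are assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def class_averaged_features(features, mask, num_classes):
    """Average pixel features per semantic class.

    features: (C, H, W) feature map from the segmentation backbone.
    mask:     (H, W) integer label map (the pixel-level mask).
    Returns a (num_classes, C) array of semantic-level feature
    representations; classes absent from the mask stay zero.
    """
    C, H, W = features.shape
    flat_feats = features.reshape(C, -1)   # (C, H*W)
    flat_mask = mask.reshape(-1)           # (H*W,)
    protos = np.zeros((num_classes, C))
    for c in range(num_classes):
        idx = flat_mask == c
        if idx.any():
            # mean over all pixels labelled with class c
            protos[c] = flat_feats[:, idx].mean(axis=1)
    return protos
```

In the paper's setup, such per-class representations computed from both domains would then be fed to a discriminator for semantic-level adversarial learning, alongside the usual global feature-map alignment.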
Related papers
- High-order Neighborhoods Know More: HyperGraph Learning Meets Source-free Unsupervised Domain Adaptation [34.08681468394247]
Source-free Unsupervised Domain Adaptation aims to classify target samples by only accessing a pre-trained source model and unlabelled target samples.
Existing methods normally exploit the pair-wise relation among target samples and attempt to discover their correlations by clustering these samples based on semantic features.
We propose a new SFDA method that exploits the high-order neighborhood relation and explicitly takes the domain shift effect into account.
arXiv Detail & Related papers (2024-05-11T05:07:43Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
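The Maximum Mean Discrepancy loss mentioned above measures the distance between two feature distributions. A minimal sketch with a linear kernel (the simplest case; DaC's actual memory-bank variant and kernel choice are not specified here, so this is only illustrative):

```python
import numpy as np

def linear_mmd(x, y):
    """Squared MMD with a linear kernel between two sample sets.

    x: (n, d) features from one group (e.g. source-like samples).
    y: (m, d) features from the other (e.g. target-specific samples).
    With a linear kernel this reduces to ||mean(x) - mean(y)||^2.
    """
    diff = x.mean(axis=0) - y.mean(axis=0)
    return float(np.sum(diff ** 2))
```

Minimizing such a term pulls the two groups' mean embeddings together, reducing the distribution mismatch.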
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Semi-supervised Domain Adaptation for Semantic Segmentation [3.946367634483361]
We propose a novel two-step semi-supervised dual-domain adaptation (SSDDA) approach to address both cross- and intra-domain gaps in semantic segmentation.
We demonstrate that the proposed approach outperforms state-of-the-art methods on two common synthetic-to-real semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-20T16:13:00Z) - Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Unsupervised Domain Adaptation with Multiple Domain Discriminators and
Adaptive Self-Training [22.366638308792734]
Unsupervised Domain Adaptation (UDA) aims at improving the generalization capability of a model trained on a source domain to perform well on a target domain for which no labeled data is available.
We propose an approach to adapt a deep neural network trained on synthetic data to real scenes addressing the domain shift between the two different data distributions.
arXiv Detail & Related papers (2020-04-27T11:48:03Z) - Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that minimizing the distance between the most similar stuff and instance features across the source and target domains helps ease the domain shift issue.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantees about the quality of the listed information and is not responsible for any consequences arising from its use.