Undoing the Damage of Label Shift for Cross-domain Semantic Segmentation
- URL: http://arxiv.org/abs/2204.05546v1
- Date: Tue, 12 Apr 2022 06:18:50 GMT
- Title: Undoing the Damage of Label Shift for Cross-domain Semantic Segmentation
- Authors: Yahao Liu, Jinhong Deng, Jiale Tao, Tong Chu, Lixin Duan, Wen Li
- Abstract summary: We show that the damage of label shift can be overcome by aligning the data conditional distribution and correcting the posterior probability.
We conduct extensive experiments on the benchmark datasets of urban scenes, including GTA5 to Cityscapes and SYNTHIA to Cityscapes.
Our model equipped with a self-training strategy reaches 59.3% mIoU on GTA5 to Cityscapes, setting a new state of the art.
- Score: 27.44765822956167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing works typically treat cross-domain semantic segmentation (CDSS) as a
data distribution mismatch problem and focus on aligning the marginal
distribution or conditional distribution. However, the label shift issue is
unfortunately overlooked, which actually commonly exists in the CDSS task, and
often causes a classifier bias in the learnt model. In this paper, we give an
in-depth analysis and show that the damage of label shift can be overcome by
aligning the data conditional distribution and correcting the posterior
probability. To this end, we propose a novel approach to undo the damage of the
label shift problem in CDSS. In implementation, we adopt class-level feature
alignment for conditional distribution alignment, as well as two simple yet
effective methods to rectify the classifier bias from source to target by
remolding the classifier predictions. We conduct extensive experiments on the
benchmark datasets of urban scenes, including GTA5 to Cityscapes and SYNTHIA to
Cityscapes, where our proposed approach outperforms previous methods by a large
margin. For instance, our model equipped with a self-training strategy reaches
59.3% mIoU on GTA5 to Cityscapes, setting a new state of the art. The code
will be available at https://github.com/manmanjun/Undoing_UDA.
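The posterior correction the abstract refers to follows from Bayes' rule: under pure label shift the class-conditional data distributions agree across domains, so the target posterior is the source posterior reweighted by the ratio of class priors. The following is a minimal illustrative sketch of that reweighting (not the paper's implementation), assuming the source and target class priors are known or estimated:

```python
import numpy as np

def correct_posterior(probs, source_prior, target_prior):
    """Rescale source-trained posteriors p_s(y|x) toward the target domain.

    Under label shift with p_t(x|y) = p_s(x|y), Bayes' rule gives
    p_t(y|x) proportional to p_s(y|x) * p_t(y) / p_s(y).
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(target_prior) / np.asarray(source_prior)
    adjusted = probs * weights  # reweight each class posterior
    return adjusted / adjusted.sum(axis=-1, keepdims=True)  # renormalize

# A classifier trained on a road/car-heavy source tends to over-predict
# those classes; reweighting by an estimated target prior undoes the bias.
probs = np.array([[0.6, 0.3, 0.1]])        # p_s(y|x) for 3 classes
source_prior = np.array([0.5, 0.3, 0.2])   # class frequencies in source
target_prior = np.array([0.2, 0.3, 0.5])   # estimated target frequencies
print(correct_posterior(probs, source_prior, target_prior))
```

Note how the predicted class can flip once the prior mismatch is removed, which is exactly the classifier bias the paper targets.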
Related papers
- GeT: Generative Target Structure Debiasing for Domain Adaptation [67.17025068995835]
Domain adaptation (DA) aims to transfer knowledge from a fully labeled source to a scarcely labeled or totally unlabeled target under domain shift.
Recently, semi-supervised learning (SSL) techniques that leverage pseudo labeling have been increasingly used in DA.
In this paper, we propose GeT, which learns an unbiased target embedding distribution with high-quality pseudo labels.
arXiv Detail & Related papers (2023-08-20T08:52:43Z)
- Chaos to Order: A Label Propagation Perspective on Source-Free Domain Adaptation [8.27771856472078]
We present Chaos to Order (CtO), a novel approach for source-free domain adaptation (SFDA).
CtO strives to constrain semantic credibility and propagate label information among target subpopulations.
Empirical evidence demonstrates that CtO outperforms the state of the art on three public benchmarks.
arXiv Detail & Related papers (2023-01-20T03:39:35Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
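The MMD loss mentioned above measures the mismatch between two feature distributions via kernel mean embeddings. A minimal kernel-MMD sketch (without the memory bank, using an assumed fixed RBF bandwidth rather than whatever schedule the paper uses):

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    sample sets x and y under k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

rng = np.random.default_rng(0)
same = rbf_mmd2(rng.normal(0, 1, (64, 2)), rng.normal(0, 1, (64, 2)))
shifted = rbf_mmd2(rng.normal(0, 1, (64, 2)), rng.normal(2, 1, (64, 2)))
print(same, shifted)  # near zero for matched distributions, larger under shift
```

Minimizing such a loss over learned features pulls the two sample populations toward a common distribution, which is the alignment role it plays in DaC.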
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Domain Adaptation under Open Set Label Shift [39.424134505152544]
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS).
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
arXiv Detail & Related papers (2022-07-26T17:09:48Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to adapt a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over the prior state of the art on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z) - Coping with Label Shift via Distributionally Robust Optimisation [72.80971421083937]
We propose a model that minimises an objective based on distributionally robust optimisation (DRO).
We then design and analyse a gradient descent-proximal mirror ascent algorithm tailored for large-scale problems to optimise the proposed objective.
arXiv Detail & Related papers (2020-10-23T08:33:04Z) - Posterior Re-calibration for Imbalanced Datasets [33.379680556475314]
Neural networks can perform poorly when the training label distribution is heavily imbalanced.
We derive a post-training prior rebalancing technique that can be solved through a KL-divergence based optimization.
Our results on six different datasets and five different architectures show state-of-the-art accuracy.
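The paper's KL-based optimization is more involved, but the core idea of post-training posterior re-calibration can be sketched with a simpler additive logit adjustment by the log prior ratio (an assumption for illustration, not the paper's exact method):

```python
import numpy as np

def recalibrate_logits(logits, train_prior, test_prior):
    """Shift logits by the log prior ratio so the implied posterior
    reflects the (balanced or estimated) test-time class distribution."""
    return logits + np.log(np.asarray(test_prior) / np.asarray(train_prior))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# A model trained on a 90/10 class split leans toward class 0; re-calibrating
# toward a uniform test-time prior can flip borderline predictions.
logits = np.array([[1.0, 0.4]])
adjusted = recalibrate_logits(logits, train_prior=[0.9, 0.1], test_prior=[0.5, 0.5])
print(softmax(logits), softmax(adjusted))
```

Because the adjustment happens after training, it needs no retraining and composes with any classifier that outputs logits.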
arXiv Detail & Related papers (2020-10-22T15:57:14Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert the domain of each sample via a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.