Unsupervised Adaptive Semantic Segmentation with Local Lipschitz
Constraint
- URL: http://arxiv.org/abs/2105.12939v1
- Date: Thu, 27 May 2021 04:28:45 GMT
- Title: Unsupervised Adaptive Semantic Segmentation with Local Lipschitz
Constraint
- Authors: Guanyu Cai, Lianghua He
- Abstract summary: We propose a two-stage adaptive semantic segmentation method based on the local Lipschitz constraint.
In the first stage, we propose the objective function to align different domains by exploiting intra-domain knowledge.
In the second stage, we use the local Lipschitzness regularization to estimate the probability of satisfying Lipschitzness for each pixel.
- Score: 11.465784695228015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in unsupervised domain adaptation have seen considerable
progress in semantic segmentation. Existing methods either align different
domains with adversarial training or rely on self-learning, which uses pseudo
labels to conduct supervised training. The former often suffers from unstable
adversarial training and focuses only on the inter-domain gap, ignoring
intra-domain knowledge. The latter tends to assign overconfident predictions to
wrong categories, which propagates errors to more samples. To solve these
problems, we propose a two-stage adaptive semantic
segmentation method based on the local Lipschitz constraint that satisfies both
domain alignment and domain-specific exploration under a unified principle. In
the first stage, we propose the local Lipschitzness regularization as the
objective function to align different domains by exploiting intra-domain
knowledge, which explores a promising direction for non-adversarial adaptive
semantic segmentation. In the second stage, we use the local Lipschitzness
regularization to estimate the probability that each pixel satisfies
Lipschitzness, and then dynamically set the threshold of pseudo labels to
conduct self-learning. Such dynamic self-learning effectively avoids the error
propagation caused by noisy labels. Optimization in both stages is based on the
same principle, i.e., the local Lipschitz constraint, so that the knowledge
learned in the first stage can be maintained in the second stage. Further, due
to the model-agnostic property, our method can easily adapt to any CNN-based
semantic segmentation networks. Experimental results demonstrate the excellent
performance of our method on standard benchmarks.
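The second-stage mechanism, scoring each pixel's local Lipschitzness from prediction consistency under a small input perturbation and relaxing the pseudo-label threshold where the score is high, can be sketched in NumPy. This is a minimal illustration under our own assumptions: the function names, the `exp(-distance)` score, and the threshold schedule are illustrative, not the paper's exact formulation.

```python
import numpy as np

def lipschitz_score(probs, probs_perturbed):
    """Per-pixel consistency between predictions on a clean and a
    slightly perturbed input; values near 1 suggest the local
    Lipschitz condition holds.  Inputs: (C, H, W) softmax maps."""
    dist = np.abs(probs - probs_perturbed).sum(axis=0)  # (H, W)
    return np.exp(-dist)

def dynamic_pseudo_labels(probs, probs_perturbed, base_tau=0.9):
    """Keep a pseudo label only where confidence exceeds a per-pixel
    threshold that is relaxed for pixels likely to be Lipschitz."""
    score = lipschitz_score(probs, probs_perturbed)  # (H, W)
    conf = probs.max(axis=0)                         # (H, W)
    labels = probs.argmax(axis=0)                    # (H, W)
    tau = base_tau * (1.0 - 0.5 * score)  # lower threshold where score is high
    return np.where(conf > tau, labels, -1)  # -1 = ignored in self-learning
```

With `base_tau=0.9`, a pixel whose prediction barely changes under perturbation is accepted at a confidence just under 0.5, while an inconsistent pixel must clear a much higher bar, which is the intended guard against error propagation.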
Related papers
- Semantic Connectivity-Driven Pseudo-labeling for Cross-domain
Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Distribution Regularized Self-Supervised Learning for Domain Adaptation
of Semantic Segmentation [3.284878354988896]
This paper proposes a pixel-level distribution regularization scheme (DRSL) for self-supervised domain adaptation of semantic segmentation.
In a typical setting, the classification loss forces the semantic segmentation model to greedily learn the representations that capture inter-class variations.
We capture pixel-level intra-class variations through class-aware multi-modal distribution learning.
arXiv Detail & Related papers (2022-06-20T09:52:49Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Exploiting Negative Learning for Implicit Pseudo Label Rectification in
Source-Free Domain Adaptive Semantic Segmentation [12.716865774780704]
State-of-the-art methods for source-free domain adaptation (SFDA) are subject to strict limits.
PR-SFDA achieves a performance of 49.0 mIoU, which is very close to that of the state-of-the-art counterparts.
arXiv Detail & Related papers (2021-06-23T02:20:31Z)
- Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
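The memory-efficient temporal ensemble above can be sketched as an exponential moving average over softmax outputs, so only one array per image is stored rather than a prediction history. This is a minimal NumPy sketch under our own assumptions; the function names, momentum value, and confidence threshold are illustrative, not the paper's exact scheme.

```python
import numpy as np

def ema_update(ensemble, probs, momentum=0.9):
    """One exponential-moving-average step over softmax outputs; only a
    single (C, H, W) array per image is kept, not the full history."""
    return momentum * ensemble + (1.0 - momentum) * probs

def ensemble_pseudo_labels(ensemble, threshold=0.5):
    """Label pixels whose ensembled confidence is high; others get -1."""
    conf = ensemble.max(axis=0)
    return np.where(conf > threshold, ensemble.argmax(axis=0), -1)
```

Because the average smooths out fluctuations between training steps, a pixel must be predicted consistently over time before its ensembled confidence clears the threshold, which is what makes the resulting pseudo-labels more reliable.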
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
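As an illustration of the uncertainty-rectified pseudo-labeling idea in the last entry, one common proxy for per-pixel uncertainty is the disagreement between two classifier heads, used to down-weight the pseudo-label loss. The following NumPy sketch uses a KL-divergence weighting of our own devising; function names and the `eps` smoothing are assumptions, not the paper's exact formulation.

```python
import numpy as np

def rectified_pseudo_loss(p_main, p_aux, eps=1e-8):
    """Cross-entropy against the main head's pseudo labels, with each
    pixel down-weighted by the KL divergence between two classifier
    heads (a simple uncertainty proxy).  Inputs: (C, H, W) softmax maps."""
    labels = p_main.argmax(axis=0)  # pseudo labels, (H, W)
    kl = (p_main * np.log((p_main + eps) / (p_aux + eps))).sum(axis=0)
    weight = np.exp(-kl)  # (H, W): 1 when heads agree, -> 0 as they diverge
    picked = np.take_along_axis(p_main, labels[None], axis=0)[0]  # (H, W)
    ce = -np.log(picked + eps)
    return (weight * ce).mean()
```

The effect is that pixels where the two heads disagree, exactly the ones whose pseudo labels are most likely wrong, contribute less to the training signal.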
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.