Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation
- URL: http://arxiv.org/abs/2007.14449v1
- Date: Tue, 28 Jul 2020 19:40:45 GMT
- Title: Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation
- Authors: M.Naseer Subhani and Mohsen Ali
- Abstract summary: We propose a novel approach that exploits the scale-invariance property of semantic segmentation models for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, the semantic labeling of objects and stuff (given context) should be unchanged regardless of their size.
We show that this constraint is violated on images of the target domain, and hence it can be used to transfer labels between differently scaled patches.
- Score: 6.320141734801679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning approaches for unsupervised domain adaptation (UDA)
of semantic segmentation models suffer from the challenge of predicting and
selecting reasonably good-quality pseudo-labels. In this paper, we propose a
novel approach that exploits the scale-invariance property of semantic
segmentation models for self-supervised domain adaptation. Our algorithm is
based on the reasonable assumption that, in general, the semantic labeling of
objects and stuff (given context) should be unchanged regardless of their size.
We show that this constraint is violated on images of the target domain, and
hence it can be used to transfer labels between differently scaled patches.
Specifically, we show that a semantic segmentation model produces output with
higher entropy when presented with scaled-up patches of the target domain than
when presented with original-size images. These scale-invariant examples are
extracted from the most confident images of the target domain. A dynamic
class-specific entropy thresholding mechanism is presented to filter out
unreliable pseudo-labels. Furthermore, we incorporate the focal loss to tackle
the problem of class imbalance in self-supervised learning. Extensive
experiments show that, by exploiting scale-invariant labeling, we outperform
existing state-of-the-art self-supervision-based domain adaptation methods.
Specifically, we achieve leads of 1.3% and 3.8% for GTA5 to Cityscapes and
SYNTHIA to Cityscapes, respectively, with a VGG16-FCN8 baseline network.
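To make the mechanism concrete, here is a minimal PyTorch sketch of the core idea, not the authors' code: pseudo-labels are taken from the model's prediction on an original-size target image, filtered by a dynamic per-class entropy threshold, and transferred to the corresponding scaled-up patch as a self-training target. The `model` interface, the crop `box`, the scale factor, and the quantile used for the threshold are all illustrative assumptions.

```python
# A sketch under stated assumptions: `model` maps (1, 3, H, W) images to
# per-pixel class logits (1, C, H, W) at input resolution.
import torch
import torch.nn.functional as F

def entropy_map(logits):
    """Per-pixel Shannon entropy of the softmax output, shape (H, W)."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).squeeze(0)

@torch.no_grad()
def scale_invariant_pseudo_labels(model, image, box, scale=2.0, quantile=0.2):
    """image: (1, 3, H, W) target-domain image; box = (top, left, h, w)."""
    t, l, h, w = box
    patch = image[:, :, t:t+h, l:l+w]
    up = F.interpolate(patch, scale_factor=scale, mode="bilinear",
                       align_corners=False)

    logits_full = model(image)                      # prediction at original scale
    logits_patch = logits_full[:, :, t:t+h, l:l+w]
    labels = logits_patch.argmax(dim=1).squeeze(0)  # (h, w) pseudo-labels
    ent = entropy_map(logits_patch)                 # confidence of those labels

    # Dynamic class-specific thresholding: for each class present, keep only
    # the lowest-entropy fraction of its pixels; mark the rest ignore (255).
    keep = torch.zeros_like(labels, dtype=torch.bool)
    for c in labels.unique():
        mask = labels == c
        thresh = torch.quantile(ent[mask], quantile)
        keep |= mask & (ent <= thresh)
    labels[~keep] = 255

    # Transfer the filtered labels to the scaled-up patch: scale-invariance
    # says the model should predict the same labeling there, which gives the
    # self-supervised training pair (up, labels_up).
    labels_up = F.interpolate(labels[None, None].float(), scale_factor=scale,
                              mode="nearest").long().squeeze()
    return up, labels_up
```

A per-class quantile rather than a single global cutoff keeps rare classes represented in the pseudo-labels instead of letting a few dominant, confident classes absorb the whole budget.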
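The class-imbalance term is the standard focal loss (Lin et al., 2017); a short sketch follows, where `gamma` and the use of 255 as the ignore index for filtered pixels are assumptions.

```python
# Focal loss over dense pseudo-labels: down-weights easy, confident pixels so
# that rare, hard classes contribute more to the self-training gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, ignore_index=255):
    """logits: (N, C, H, W); target: (N, H, W), 255 marks dropped pixels."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, target, ignore_index=ignore_index, reduction="none")
    p_t = torch.exp(-ce)                  # probability of the true class
    loss = (1.0 - p_t) ** gamma * ce      # (1 - p_t)^gamma focal modulation
    return loss[target != ignore_index].mean()
```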
Related papers
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Domain Adaptation for Medical Image Segmentation using Transformation-Invariant Self-Training [7.738197566031678]
We propose a semi-supervised learning strategy for domain adaptation termed transformation-invariant self-training (TI-ST).
The proposed method assesses pixel-wise pseudo-labels' reliability and filters out unreliable detections during self-training.
arXiv Detail & Related papers (2023-07-31T13:42:56Z)
- Regularizing Self-training for Unsupervised Domain Adaptation via Structural Constraints [14.593782939242121]
We propose to incorporate structural cues from auxiliary modalities, such as depth, to regularise conventional self-training objectives.
Specifically, we introduce a contrastive pixel-level objectness constraint that pulls the pixel representations within a region of an object instance closer.
We show that our regularizer significantly improves top performing self-training methods in various UDA benchmarks for semantic segmentation.
arXiv Detail & Related papers (2023-04-29T00:12:26Z)
- Contrastive Model Adaptation for Cross-Condition Robustness in Semantic Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages pairs of corresponding normal- and adverse-condition images to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
arXiv Detail & Related papers (2023-03-09T11:48:29Z)
- Distribution Regularized Self-Supervised Learning for Domain Adaptation of Semantic Segmentation [3.284878354988896]
This paper proposes a pixel-level distribution regularization scheme (DRSL) for self-supervised domain adaptation of semantic segmentation.
In a typical setting, the classification loss forces the semantic segmentation model to greedily learn the representations that capture inter-class variations.
We capture pixel-level intra-class variations through class-aware multi-modal distribution learning.
arXiv Detail & Related papers (2022-06-20T09:52:49Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation [31.50802009879241]
Domain adaptive semantic segmentation aims to learn a model with the supervision of source domain data and produce dense predictions on the unlabeled target domain.
One popular solution to this challenging task is self-training, which selects high-scoring predictions on target samples as pseudo labels for training.
We propose to directly explore the intrinsic pixel distributions of target domain data, instead of heavily relying on the source domain.
arXiv Detail & Related papers (2022-03-18T04:56:20Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can help address domain shifts.
To explore the full potential of these labeled samples (landmarks), we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion, called ILA-DA, for source-to-target transfer during adaptation.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Domain Adaptive Semantic Segmentation Using Weak Labels [115.16029641181669]
We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module to enforce the network to attend to certain categories.
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
arXiv Detail & Related papers (2020-07-30T01:33:57Z)