Unsupervised Domain Adaptation in Semantic Segmentation via Orthogonal
and Clustered Embeddings
- URL: http://arxiv.org/abs/2011.12616v1
- Date: Wed, 25 Nov 2020 10:06:22 GMT
- Title: Unsupervised Domain Adaptation in Semantic Segmentation via Orthogonal
and Clustered Embeddings
- Authors: Marco Toldo, Umberto Michieli, Pietro Zanuttigh
- Abstract summary: We propose an effective Unsupervised Domain Adaptation (UDA) strategy, based on a feature clustering method.
We introduce two novel learning objectives to enhance the discriminative clustering performance.
- Score: 25.137859989323537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning frameworks have allowed for remarkable advances in semantic
segmentation, but the data-hungry nature of convolutional networks has rapidly
raised the demand for adaptation techniques able to transfer learned knowledge
from label-abundant domains to unlabeled ones. In this paper we propose an
effective Unsupervised Domain Adaptation (UDA) strategy based on a feature
clustering method that captures the different semantic modes of the feature
distribution and groups features of the same class into tight and
well-separated clusters. Furthermore, we introduce two novel learning
objectives to enhance the discriminative clustering performance: an
orthogonality loss forces spaced-out individual representations to be
orthogonal, while a sparsity loss reduces, class-wise, the number of active
feature channels. The joint effect of these modules is to regularize the
structure of the feature space. Extensive evaluations in the synthetic-to-real
scenario show that we achieve state-of-the-art performance.
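As a rough illustration of the two regularizers described above, the sketch below (PyTorch) penalizes non-orthogonal class-wise feature representations and the spread of each class's activation over feature channels. The prototype computation, the squared off-diagonal Gram penalty, the L1/L2 sparsity surrogate, and all function names are assumptions made for illustration; they are not taken from the paper's implementation.

```python
# Illustrative sketch only: the prototype computation and the exact penalty
# formulations below are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def class_prototypes(features, labels, num_classes):
    """Mean embedding per class present in the batch.

    features: (N, C) per-pixel embeddings, labels: (N,) class indices.
    Returns a (K, C) prototype tensor and a (K,) boolean presence mask.
    """
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    present = torch.zeros(num_classes, dtype=torch.bool, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
            present[c] = True
    return protos, present


def orthogonality_loss(features, labels, num_classes):
    """Push class representations towards mutual orthogonality.

    The Gram matrix of L2-normalized prototypes should approach the identity,
    so the squared off-diagonal entries (cross-class cosine similarities)
    are penalized.
    """
    protos, present = class_prototypes(features, labels, num_classes)
    protos = F.normalize(protos[present], dim=1)             # (K', C)
    gram = protos @ protos.t()                               # (K', K') cosine similarities
    k = gram.size(0)
    off_diag = gram * (1.0 - torch.eye(k, device=gram.device))
    return (off_diag ** 2).sum() / max(k * (k - 1), 1)


def sparsity_loss(features, labels, num_classes):
    """Reduce, class-wise, the number of active feature channels.

    Uses an L1/L2 ratio on each class prototype: it is smallest when a single
    channel carries all the energy and largest when activations spread
    uniformly, so minimizing it promotes channel sparsity (a common sparsity
    surrogate, assumed here for illustration).
    """
    protos, present = class_prototypes(features, labels, num_classes)
    if not present.any():
        return features.new_zeros(())
    protos = protos[present].abs()                           # (K', C)
    l1 = protos.sum(dim=1)
    l2 = protos.norm(p=2, dim=1).clamp_min(1e-8)
    return (l1 / l2).mean()
```

In practice, one would flatten the (B, C, H, W) decoder features to (N, C) and the corresponding labels to (N,), then add both terms to the supervised segmentation loss with small weights.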
Related papers
- Instance Adaptive Prototypical Contrastive Embedding for Generalized
Zero Shot Learning [11.720039414872296]
Generalized zero-shot learning aims to classify samples from seen and unseen labels, assuming unseen labels are not accessible during training.
Recent advancements in GZSL have been expedited by incorporating contrastive-learning-based embedding in generative networks.
arXiv Detail & Related papers (2023-09-13T14:26:03Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task, in which the source and target domains do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domain globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Adapting Segmentation Networks to New Domains by Disentangling Latent
Representations [14.050836886292869]
Domain adaptation approaches have come into play to transfer knowledge acquired on a label-abundant source domain to a related label-scarce target domain.
We propose a novel performance metric to capture the relative efficacy of an adaptation strategy compared to supervised training.
arXiv Detail & Related papers (2021-08-06T09:43:07Z) - More Separable and Easier to Segment: A Cluster Alignment Method for
Cross-Domain Semantic Segmentation [41.81843755299211]
We propose a new UDA semantic segmentation approach based on a domain closeness assumption to alleviate the above problems.
Specifically, a prototype clustering strategy is applied to cluster pixels with the same semantics, which better maintains associations among target-domain pixels.
Experiments conducted on GTA5 and SYNTHIA demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-05-07T10:24:18Z) - Latent Space Regularization for Unsupervised Domain Adaptation in
Semantic Segmentation [14.050836886292869]
We introduce feature-level space-shaping regularization strategies to reduce the domain discrepancy in semantic segmentation.
We verify the effectiveness of such methods in the autonomous driving setting.
arXiv Detail & Related papers (2021-04-06T16:07:22Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin-preserving self-paced contrastive learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin-preserving contrastive loss is proposed to boost the discriminability of the embedded representation space (a rough, assumption-based sketch of one such margin-based prototype loss is given after this list).
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques to adapt semantic segmentation networks across the source and target domains within deep convolutional neural networks (CNNs) do not consider the inter-class variation within the target domain itself or within the estimated categories.
We introduce a learnable clustering module and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain
Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) aims to learn classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z) - Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
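For the margin-preserving, prototype-guided contrastive loss mentioned in the MPSCL entry above, one plausible reading is an additive-margin softmax over class prototypes; the minimal sketch below follows that assumption (the margin and temperature values, the function name, and the omission of the self-paced sample weighting are all illustrative choices, not the paper's formulation).

```python
# Assumption-based sketch of a margin-based prototype contrastive loss;
# not the MPSCL authors' formulation.
import torch
import torch.nn.functional as F


def margin_prototype_contrastive_loss(embeddings, labels, prototypes,
                                      margin=0.2, temperature=0.1):
    """embeddings: (N, C), labels: (N,), prototypes: (K, C)."""
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t() / temperature           # (N, K) scaled cosine similarities
    # Subtract the margin only from the ground-truth prototype's similarity,
    # so the positive pair must beat every negative by at least the margin.
    one_hot = F.one_hot(labels, num_classes=prototypes.size(0)).float()
    logits = logits - one_hot * (margin / temperature)
    return F.cross_entropy(logits, labels)
```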
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.