CLUDA : Contrastive Learning in Unsupervised Domain Adaptation for
Semantic Segmentation
- URL: http://arxiv.org/abs/2208.14227v1
- Date: Sat, 27 Aug 2022 05:13:14 GMT
- Title: CLUDA : Contrastive Learning in Unsupervised Domain Adaptation for
Semantic Segmentation
- Authors: Midhun Vayyat, Jaswin Kasi, Anuraag Bhattacharya, Shuaib Ahmed, Rahul
Tallamraju
- Abstract summary: CLUDA is a simple, yet novel method for performing unsupervised domain adaptation (UDA) for semantic segmentation.
We extract a multi-level fused-feature map from the encoder, and apply contrastive loss across different classes and different domains.
We produce state-of-the-art results on GTA $\rightarrow$ Cityscapes (74.4 mIOU, +0.6) and Synthia $\rightarrow$ Cityscapes (67.2 mIOU, +1.4) datasets.
- Score: 3.4123736336071864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose CLUDA, a simple, yet novel method for performing
unsupervised domain adaptation (UDA) for semantic segmentation by incorporating
contrastive losses into a student-teacher learning paradigm that makes use of
pseudo-labels generated from the target domain by the teacher network. More
specifically, we extract a multi-level fused-feature map from the encoder, and
apply contrastive loss across different classes and different domains, via
source-target mixing of images. We consistently improve performance on various
feature encoder architectures and for different domain adaptation datasets in
semantic segmentation. Furthermore, we introduce a learned-weighted contrastive
loss to improve upon a state-of-the-art multi-resolution training approach
in UDA. We produce state-of-the-art results on GTA $\rightarrow$ Cityscapes
(74.4 mIOU, +0.6) and Synthia $\rightarrow$ Cityscapes (67.2 mIOU, +1.4)
datasets. CLUDA effectively demonstrates contrastive learning in UDA as a
generic method, which can be easily integrated into any existing UDA pipeline for
semantic segmentation tasks. Please refer to the supplementary material for the
details on implementation.
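To make the idea concrete, below is a minimal, illustrative sketch (in PyTorch) of the kind of class-wise, cross-domain contrastive loss the abstract describes: per-class prototype features are built from source ground-truth labels and teacher pseudo-labels on the mixed target images, and an InfoNCE-style loss pulls pixel features toward their own class prototype and away from the others. All names, shapes, and the fixed temperature are assumptions for illustration; CLUDA's learned weighting, multi-level feature fusion, and exact source-target mixing are not reproduced here.

```python
import torch
import torch.nn.functional as F

def class_wise_contrastive_loss(feats, labels, num_classes, temperature=0.1, ignore_index=255):
    """Illustrative class-wise contrastive loss over a fused feature map.

    feats:  (B, C, H, W) fused encoder features for source and mixed
            source/target images stacked along the batch dimension.
    labels: (B, H0, W0) ground-truth labels for source pixels and teacher
            pseudo-labels for target pixels; unreliable pixels carry
            ignore_index (assumed layout, not the authors' exact setup).
    """
    B, C, H, W = feats.shape
    # Downsample labels to the feature resolution.
    labels = F.interpolate(labels[:, None].float(), size=(H, W), mode="nearest")[:, 0].long()

    flat_feats = feats.permute(0, 2, 3, 1).reshape(-1, C)   # (B*H*W, C)
    flat_labels = labels.reshape(-1)                         # (B*H*W,)

    # One prototype (mean feature) per class present in the batch,
    # pooled over both domains.
    protos, proto_classes = [], []
    for c in range(num_classes):
        mask = flat_labels == c
        if mask.any():
            protos.append(flat_feats[mask].mean(dim=0))
            proto_classes.append(c)
    protos = F.normalize(torch.stack(protos), dim=1)         # (K, C)
    proto_classes = torch.tensor(proto_classes, device=feats.device)

    # InfoNCE over pixels: each pixel is attracted to its own class
    # prototype and repelled from the prototypes of all other classes.
    valid = flat_labels != ignore_index
    pix = F.normalize(flat_feats[valid], dim=1)               # (N, C)
    logits = pix @ protos.t() / temperature                   # (N, K)

    # Map each pixel's class label to the index of its prototype.
    class_to_idx = torch.full((num_classes,), -1, dtype=torch.long, device=feats.device)
    class_to_idx[proto_classes] = torch.arange(len(proto_classes), device=feats.device)
    targets = class_to_idx[flat_labels[valid]]
    return F.cross_entropy(logits, targets)
```

In a full UDA pipeline, a term of this kind would be added, with some weight, to the supervised cross-entropy on source images and the self-training loss on teacher pseudo-labels for the target domain.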
Related papers
- C^2DA: Contrastive and Context-aware Domain Adaptive Semantic Segmentation [11.721696305235767]
Unsupervised domain adaptive semantic segmentation (UDA-SS) aims to train a model on the source domain data and adapt the model to predict target domain data.
Most existing UDA-SS methods only focus on inter-domain knowledge to mitigate the data-shift problem.
We propose a UDA-SS framework that learns both intra-domain and context-aware knowledge.
arXiv Detail & Related papers (2024-10-10T15:51:35Z)
- MICDrop: Masking Image and Depth Features via Complementary Dropout for Domain-Adaptive Semantic Segmentation [155.0797148367653]
Unsupervised Domain Adaptation (UDA) is the task of bridging the domain gap between a labeled source domain and an unlabeled target domain.
We propose to leverage geometric information, i.e., depth predictions, as depth discontinuities often coincide with segmentation boundaries.
We show that our method can be plugged into various recent UDA methods and consistently improve results across standard UDA benchmarks.
arXiv Detail & Related papers (2024-08-29T12:15:10Z)
- Semi-supervised Domain Adaptive Medical Image Segmentation through Consistency Regularized Disentangled Contrastive Learning [11.049672162852733]
In this work, we investigate relatively less explored semi-supervised domain adaptation (SSDA) for medical image segmentation.
We propose a two-stage training process: first, an encoder is pre-trained in a self-learning paradigm using a novel domain-content disentangled contrastive learning (CL) along with a pixel-level feature consistency constraint.
We experimentally validate that our proposed method can easily be extended to UDA settings, adding to the superiority of the proposed strategy.
arXiv Detail & Related papers (2023-07-06T06:13:22Z)
- MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation [104.40114562948428]
In unsupervised domain adaptation (UDA), a model trained on source data (e.g. synthetic) is adapted to target data (e.g. real-world) without access to target annotation.
We propose a Masked Image Consistency (MIC) module to enhance UDA by learning spatial context relations of the target domain.
MIC significantly improves the state-of-the-art performance across the different recognition tasks for synthetic-to-real, day-to-nighttime, and clear-to-adverse-weather UDA.
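As a rough, hedged sketch of the masked-consistency idea summarized above (not the MIC authors' implementation), one can mask random patches of an unlabeled target image and train the student to reproduce the EMA teacher's pseudo-labels from the remaining spatial context. The patch size, mask ratio, and confidence weighting below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_consistency_loss(student, teacher, target_img, patch=64, mask_ratio=0.7):
    """Illustrative masked-image consistency step for an unlabeled target image.

    student, teacher: segmentation networks returning (B, num_classes, H, W)
    logits; the teacher is assumed to be an EMA copy of the student and is
    not updated here.
    """
    B, _, H, W = target_img.shape
    # Pseudo-labels and confidence from the teacher on the *unmasked* image.
    with torch.no_grad():
        t_prob, pseudo = teacher(target_img).softmax(dim=1).max(dim=1)   # (B, H, W)

    # Randomly drop square patches of the target image.
    mh, mw = H // patch, W // patch
    keep = (torch.rand(B, 1, mh, mw, device=target_img.device) > mask_ratio).float()
    keep = F.interpolate(keep, size=(H, W), mode="nearest")
    masked_img = target_img * keep

    # The student must predict the teacher's pseudo-labels from the masked
    # image, i.e. from spatial context; low-confidence pixels are down-weighted.
    s_logits = student(masked_img)
    loss = F.cross_entropy(s_logits, pseudo, reduction="none")            # (B, H, W)
    return (t_prob * loss).mean()
```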
arXiv Detail & Related papers (2022-12-02T17:29:32Z)
- PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation [100.6343963798169]
Unsupervised Domain Adaptation (UDA) aims to enhance the generalization of the learned model to other domains.
We propose a unified pixel- and patch-wise self-supervised learning framework, called PiPa, for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2022-11-14T18:31:24Z)
- Unsupervised Contrastive Domain Adaptation for Semantic Segmentation [75.37470873764855]
We introduce contrastive learning for feature alignment in cross-domain adaptation.
The proposed approach consistently outperforms state-of-the-art methods for domain adaptation.
It achieves 60.2% mIoU on the Cityscapes dataset.
arXiv Detail & Related papers (2022-04-18T16:50:46Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- Exploiting Image Translations via Ensemble Self-Supervised Learning for Unsupervised Domain Adaptation [0.0]
We introduce an unsupervised domain adaptation (UDA) strategy that combines multiple image translations, ensemble learning, and self-supervised learning in one coherent approach.
We focus on one of the standard tasks of UDA in which a semantic segmentation model is trained on labeled synthetic data together with unlabeled real-world data.
arXiv Detail & Related papers (2021-07-13T16:43:02Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
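A minimal sketch of the prototype step described above, assuming per-class prototypes averaged from the few labeled target samples (the landmarks) and a simple cosine pull of (pseudo-)labeled target features toward their class prototype; names and shapes are illustrative, and the paper's perturbation-based consistency learning is omitted.

```python
import torch
import torch.nn.functional as F

def prototypical_alignment_loss(landmark_feats, landmark_labels, feats, pseudo_labels, num_classes):
    """Illustrative prototypical-alignment term.

    landmark_feats:  (M, C) features of the few labeled target samples.
    landmark_labels: (M,)   their class labels.
    feats:           (N, C) features of unlabeled target samples.
    pseudo_labels:   (N,)   their (pseudo-)labels.
    """
    C = landmark_feats.shape[1]
    # One prototype per class, averaged over the labeled target samples.
    protos = torch.zeros(num_classes, C, device=landmark_feats.device)
    counts = torch.zeros(num_classes, device=landmark_feats.device)
    protos.index_add_(0, landmark_labels, landmark_feats)
    counts.index_add_(0, landmark_labels, torch.ones_like(landmark_labels, dtype=protos.dtype))
    protos = F.normalize(protos / counts.clamp(min=1).unsqueeze(1), dim=1)

    # Pull each (pseudo-)labeled target feature toward its class prototype
    # by maximizing cosine similarity.
    feats = F.normalize(feats, dim=1)
    return (1 - (feats * protos[pseudo_labels]).sum(dim=1)).mean()
```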
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.