Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation
- URL: http://arxiv.org/abs/2105.02001v1
- Date: Wed, 5 May 2021 11:55:53 GMT
- Title: Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation
- Authors: Robert A. Marsden, Alexander Bartler, Mario Döbler, Bin Yang
- Abstract summary: Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
- Score: 71.77083272602525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have considerably improved
state-of-the-art results for semantic segmentation. Nevertheless, even modern
architectures lack the ability to generalize well to a test dataset that
originates from a different domain. To avoid the costly annotation of training
data for unseen domains, unsupervised domain adaptation (UDA) attempts to
provide efficient knowledge transfer from a labeled source domain to an
unlabeled target domain. Previous work has mainly focused on minimizing the
discrepancy between the two domains by using adversarial training or
self-training. While adversarial training may fail to align the correct
semantic categories as it minimizes the discrepancy between the global
distributions, self-training raises the question of how to provide reliable
pseudo-labels. To align the correct semantic categories across domains, we
propose a contrastive learning approach that adapts category-wise centroids
across domains. Furthermore, we extend our method with self-training, where we
use a memory-efficient temporal ensemble to generate consistent and reliable
pseudo-labels. Although both contrastive learning and self-training (CLST)
through temporal ensembling enable knowledge transfer between two domains, it
is their combination that leads to a symbiotic structure. We validate our
approach on two domain adaptation benchmarks: GTA5 $\rightarrow$ Cityscapes and
SYNTHIA $\rightarrow$ Cityscapes. Our method achieves results that are better
than or comparable to the state of the art. We will make the code publicly
available.
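The abstract names two concrete components: aligning category-wise centroids across domains with a contrastive loss, and deriving pseudo-labels from a memory-efficient temporal ensemble. The sketch below illustrates what such components commonly look like in PyTorch. It is a minimal sketch under stated assumptions, not the authors' released code: every name (`class_centroids`, `centroid_contrastive_loss`, `TemporalEnsemble`) and every hyperparameter (temperature, momentum, confidence threshold) is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def class_centroids(features, labels, num_classes):
    """Mean embedding per semantic class.

    features: (N, D) pixel embeddings, labels: (N,) class ids.
    Classes absent from the batch keep a zero centroid; a full
    implementation would track running centroids across batches.
    """
    centroids = torch.zeros(num_classes, features.size(1),
                            device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
    return centroids

def centroid_contrastive_loss(src_cent, tgt_cent, tau=0.1):
    """InfoNCE over cosine similarities: pull same-class centroids
    together across domains, push different classes apart."""
    src = F.normalize(src_cent, dim=1)
    tgt = F.normalize(tgt_cent, dim=1)
    logits = src @ tgt.t() / tau                       # (C, C)
    targets = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, targets)

class TemporalEnsemble:
    """Exponential moving average of per-pixel class probabilities,
    from which stable pseudo-labels are read off."""
    def __init__(self, momentum=0.9, threshold=0.9):
        self.momentum = momentum
        self.threshold = threshold
        self.ema = {}                                  # image id -> (C, H, W)

    def update(self, image_id, probs):
        probs = probs.detach()
        if image_id in self.ema:
            probs = (self.momentum * self.ema[image_id]
                     + (1.0 - self.momentum) * probs)
        self.ema[image_id] = probs
        conf, pseudo = probs.max(dim=0)
        pseudo[conf < self.threshold] = 255            # ignore index
        return pseudo
```

In a CLST-style training loop, a centroid loss of this kind would be added to the supervised source loss, while the ensemble's pseudo-labels would supervise the target images; the exact weighting and scheduling are not specified by the abstract.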
Related papers
- Test-Time Domain Adaptation by Learning Domain-Aware Batch Normalization [39.14048972373775]
Test-time domain adaptation aims to adapt the model trained on source domains to unseen target domains using a few unlabeled images.
Previous works typically update the whole network naively, without explicitly
decoupling label-related from domain-related knowledge.
We propose to reduce such learning interference and strengthen domain knowledge
learning by manipulating only the BN layers.
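The summary only names the idea of adapting through BN layers. One common instantiation of test-time adaptation in this spirit (entropy minimization over BatchNorm parameters, as in approaches such as Tent) looks roughly as follows; the function names and the entropy objective are assumptions for illustration, not necessarily this paper's method.

```python
import torch
import torch.nn as nn

def configure_bn_only(model):
    """Freeze all weights except BatchNorm affine parameters and let BN
    re-estimate its normalization statistics from target batches."""
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                        # use current-batch statistics
            if m.affine:
                m.weight.requires_grad_(True)
                m.bias.requires_grad_(True)
                params += [m.weight, m.bias]
    return params

def entropy_adapt_step(model, x, optimizer):
    """One unsupervised step on a target batch: minimize mean prediction
    entropy, which only updates the BN parameters selected above."""
    probs = model(x).softmax(dim=1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A typical use would be `optimizer = torch.optim.SGD(configure_bn_only(model), lr=1e-3)`, then calling `entropy_adapt_step` on each incoming target batch.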
arXiv Detail & Related papers (2023-12-15T19:22:21Z)
- Unsupervised Domain Adaptation for Anatomical Landmark Detection [5.070344284426738]
We propose a novel framework for anatomical landmark detection under the setting of unsupervised domain adaptation (UDA).
The framework leverages self-training and domain adversarial learning to address the domain gap during adaptation.
Our experiments on cephalometric and lung landmark detection show the effectiveness of the method, which reduces the domain gap by a large margin and outperforms other UDA methods consistently.
arXiv Detail & Related papers (2023-08-25T10:22:13Z)
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
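For the adversarial, domain-level half of such a two-stage design, a standard building block is a domain discriminator trained through a gradient-reversal layer. The following sketch shows that generic pattern, not CDA's actual architecture; the class names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign so the
    feature extractor learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Small MLP that predicts source (0) vs. target (1) from features."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats, lam=1.0):
        return self.net(GradReverse.apply(feats, lam)).squeeze(1)
```

Training would pair this with a binary cross-entropy loss on domain labels, e.g. `F.binary_cross_entropy_with_logits(disc(feats), domain_labels)`; the class-level contrastive stage would then act on labeled source and pseudo-labeled target features, much like the centroid loss sketched for the main paper above.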
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) can be severely hampered by a mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective in UDA by exploiting the self-supervisions of unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
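The entry names its two components only abstractly. As a hedged illustration: positive learning is often realized as cross-entropy on confident self-predictions, and negative learning as penalizing classes the model already deems unlikely (complementary labels). The thresholds and the function name below are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def positive_negative_losses(logits, pos_thresh=0.9, neg_thresh=0.05):
    """Source-free adaptation signals derived from the model's own output.

    Positive learning: cross-entropy on pixels whose top class probability
    exceeds pos_thresh. Negative learning: for classes whose probability is
    already below neg_thresh, push it further down via -log(1 - p).
    logits: (N, C) per-pixel class scores.
    """
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)

    pos_mask = conf > pos_thresh
    if pos_mask.any():
        pos_loss = F.cross_entropy(logits[pos_mask], pseudo[pos_mask])
    else:
        pos_loss = logits.sum() * 0.0             # graph-connected zero

    neg_mask = (probs < neg_thresh).float()       # complementary labels
    neg_loss = -(neg_mask * (1.0 - probs).clamp_min(1e-8).log()).sum()
    neg_loss = neg_loss / neg_mask.sum().clamp_min(1.0)
    return pos_loss, neg_loss
```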
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)