Style Mixing and Patchwise Prototypical Matching for One-Shot
Unsupervised Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2112.04665v1
- Date: Thu, 9 Dec 2021 02:47:46 GMT
- Title: Style Mixing and Patchwise Prototypical Matching for One-Shot
Unsupervised Domain Adaptive Semantic Segmentation
- Authors: Xinyi Wu and Zhenyao Wu and Yuhang Lu and Lili Ju and Song Wang
- Abstract summary: In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that avoids the computational burden of a separate style-transfer module.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
- Score: 21.01132797297286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we tackle the problem of one-shot unsupervised domain
adaptation (OSUDA) for semantic segmentation where the segmentors only see one
unlabeled target image during training. In this case, traditional unsupervised
domain adaptation models usually fail, since they over-fit to the one (or few)
target samples rather than adapting to the target domain. To address this
problem, existing OSUDA methods usually integrate a style-transfer module to
perform domain randomization based on the unlabeled target sample, with which
multiple domains around the target sample can be explored during training.
However, such a style-transfer module relies on an additional set of images as
style reference for pre-training and also increases the memory demand for
domain adaptation. Here we propose a new OSUDA method that can effectively
relieve such computational burden. Specifically, we integrate several
style-mixing layers into the segmentor which play the role of style-transfer
module to stylize the source images without introducing any learned parameters.
Moreover, we propose a patchwise prototypical matching (PPM) method that
weights the importance of source pixels during the supervised training so as to
relieve negative adaptation. Experimental results show that our method
achieves new state-of-the-art performance on two commonly used benchmarks for
domain adaptive semantic segmentation under the one-shot setting and is more
efficient than all comparison approaches.
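To make the abstract's two components more concrete, here is a minimal PyTorch sketch of a parameter-free style-mixing layer in the spirit described above: it re-normalizes source feature maps with a mix of source and target channel-wise statistics (in the style of AdaIN/MixStyle), introducing no learned parameters. The class name, the mixing coefficient `lam`, and the use of channel-wise mean and standard deviation are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn


class StyleMixing(nn.Module):
    """Parameter-free style mixing: re-normalizes source features with a convex
    combination of source and target channel-wise statistics (assumed design)."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, f_src: torch.Tensor, f_tgt: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
        # f_src: (B, C, H, W) source features; f_tgt: (1, C, H, W) target features.
        mu_s = f_src.mean(dim=(2, 3), keepdim=True)
        sig_s = f_src.std(dim=(2, 3), keepdim=True) + self.eps
        mu_t = f_tgt.mean(dim=(2, 3), keepdim=True)
        sig_t = f_tgt.std(dim=(2, 3), keepdim=True) + self.eps

        # Mix the "styles" (channel-wise statistics) of the two domains.
        mu_mix = lam * mu_s + (1.0 - lam) * mu_t
        sig_mix = lam * sig_s + (1.0 - lam) * sig_t

        # Keep the source content, transfer the mixed style; no learned parameters.
        return sig_mix * (f_src - mu_s) / sig_s + mu_mix
```

And a rough sketch of how patchwise prototypical matching (PPM) could weight source pixels: class prototypes are estimated from the features of the single (pseudo-labeled) target image, and each source pixel is weighted in the supervised loss by how well its feature matches the prototype of its ground-truth class. For brevity this sketch uses image-level rather than patch-level prototypes; the sigmoid weighting and the temperature `tau` are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def ppm_weights(src_feat, src_label, tgt_feat, tgt_pseudo, num_classes, tau=0.1):
    """Per-pixel weights for the source cross-entropy loss (hypothetical sketch).

    src_feat:   (B, C, H, W) source features
    src_label:  (B, H, W)    source ground-truth labels
    tgt_feat:   (1, C, H, W) features of the single target image
    tgt_pseudo: (1, H, W)    pseudo-labels of the target image
    """
    C = src_feat.size(1)
    f_t = tgt_feat.permute(0, 2, 3, 1).reshape(-1, C)   # (N_t, C)
    y_t = tgt_pseudo.reshape(-1)                         # (N_t,)

    # Class prototypes: mean target feature per pseudo-class.
    protos = torch.zeros(num_classes, C, device=src_feat.device)
    for c in range(num_classes):
        mask = y_t == c
        if mask.any():
            protos[c] = f_t[mask].mean(dim=0)

    # Cosine similarity of every source pixel to the prototype of its own class.
    f_s = F.normalize(src_feat, dim=1)                   # (B, C, H, W)
    p = F.normalize(protos, dim=1)                       # (K, C)
    sim = torch.einsum('bchw,kc->bkhw', f_s, p)          # (B, K, H, W)
    idx = src_label.clamp(0, num_classes - 1).long().unsqueeze(1)
    w = torch.gather(sim, 1, idx).squeeze(1)             # (B, H, W)

    # Map similarity to a soft weight; well-matched pixels get higher weight.
    return torch.sigmoid(w / tau)
```

In training, such a weight map would presumably multiply the per-pixel cross-entropy on the stylized source images, so that source pixels whose features disagree with the target prototypes contribute less and negative adaptation is reduced.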
Related papers
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z) - I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic
Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work.
The key idea for tackling this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z) - Continual Unsupervised Domain Adaptation for Semantic Segmentation using
a Class-Specific Transfer [9.46677024179954]
Semantic segmentation models do not generalize to unseen domains.
We propose a light-weight style transfer framework that incorporates two class-conditional AdaIN layers.
We extensively validate our approach on a synthetic sequence and further propose a challenging sequence consisting of real domains.
arXiv Detail & Related papers (2022-08-12T21:30:49Z) - Domain-invariant Prototypes for Semantic Segmentation [30.932130453313537]
We present an easy-to-train framework that learns domain-invariant prototypes for domain adaptive semantic segmentation.
Our method involves only one-stage training and does not need to be trained on large-scale un-annotated target images.
arXiv Detail & Related papers (2022-08-12T02:21:05Z) - Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with
Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Pixel-Level Cycle Association: A New Perspective for Domain Adaptive
Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z) - Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation [43.351728923472464]
One-Shot Unsupervised Domain Adaptation assumes that only one unlabeled target sample can be available when learning to adapt.
Traditional adaptation approaches are prone to failure due to the scarcity of unlabeled target data.
We propose a novel Adversarial Style Mining approach, which combines the style transfer module and the task-specific module in an adversarial manner.
arXiv Detail & Related papers (2020-04-13T16:18:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.