Continual Unsupervised Domain Adaptation for Semantic Segmentation using
a Class-Specific Transfer
- URL: http://arxiv.org/abs/2208.06507v1
- Date: Fri, 12 Aug 2022 21:30:49 GMT
- Title: Continual Unsupervised Domain Adaptation for Semantic Segmentation using
a Class-Specific Transfer
- Authors: Robert A. Marsden, Felix Wiewel, Mario Döbler, Yang Yang, and Bin Yang
- Abstract summary: Semantic segmentation models often fail to generalize to unseen domains.
We propose a light-weight style transfer framework that incorporates two class-conditional AdaIN layers.
We extensively validate our approach on a synthetic sequence and further propose a challenging sequence consisting of real domains.
- Score: 9.46677024179954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been tremendous progress in the field of semantic
segmentation. However, one remaining challenging problem is that segmentation
models do not generalize to unseen domains. To overcome this problem, one
either has to label lots of data covering the whole variety of domains, which
is often infeasible in practice, or apply unsupervised domain adaptation (UDA),
which requires only labeled source data alongside unlabeled target data. In this
work, we focus on UDA and
additionally address the case of adapting not only to a single domain, but to a
sequence of target domains. This requires mechanisms preventing the model from
forgetting its previously learned knowledge. To adapt a segmentation model to a
target domain, we follow the idea of utilizing light-weight style transfer to
convert the style of labeled source images into the style of the target domain,
while retaining the source content. To mitigate the distributional shift
between the source and the target domain, the model is fine-tuned on the
transferred source images in a second step. Existing light-weight style
transfer approaches relying on adaptive instance normalization (AdaIN) or
Fourier transformation still fall short in performance and do not substantially
improve upon common data augmentations such as color jittering. The reason is
that these methods do not focus on region- or class-specific differences, but
mainly capture the most salient style. Therefore, we propose a simple and
light-weight framework that incorporates two class-conditional AdaIN layers. To
extract the class-specific target moments needed for the transfer layers, we
use unfiltered pseudo-labels, which we show to be an effective approximation
compared to real labels. We extensively validate our approach (CACE) on a
synthetic sequence and further propose a challenging sequence consisting of
real domains. CACE outperforms existing methods visually and quantitatively.
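The class-conditional AdaIN transfer described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name `class_conditional_adain`, the feature shapes, and the toy data are all hypothetical. Per the paper, the per-class target moments would in practice be estimated from unfiltered pseudo-labels on target images.

```python
import numpy as np

def class_conditional_adain(src_feat, tgt_feat, src_labels, tgt_pseudo,
                            num_classes, eps=1e-5):
    """Re-style source features with per-class target moments
    (a class-conditional variant of AdaIN).

    src_feat, tgt_feat: (C, H, W) feature maps.
    src_labels: (H, W) ground-truth class map for the source image.
    tgt_pseudo: (H, W) unfiltered pseudo-label map for the target image.
    """
    out = src_feat.copy()
    for c in range(num_classes):
        s = src_labels == c
        t = tgt_pseudo == c
        if not s.any() or not t.any():
            continue  # class missing in one image: leave the region untouched
        s_reg = src_feat[:, s]   # (C, N_s) source pixels of class c
        t_reg = tgt_feat[:, t]   # (C, N_t) target pixels of class c
        s_mu, s_std = s_reg.mean(1, keepdims=True), s_reg.std(1, keepdims=True)
        t_mu, t_std = t_reg.mean(1, keepdims=True), t_reg.std(1, keepdims=True)
        # AdaIN: normalize with source moments, re-scale with target moments
        out[:, s] = (s_reg - s_mu) / (s_std + eps) * t_std + t_mu
    return out

# Toy example: two classes, target statistics shifted relative to the source
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (4, 8, 8))
tgt = rng.normal(3.0, 2.0, (4, 8, 8))
src_labels = np.zeros((8, 8), dtype=int); src_labels[:, 4:] = 1
tgt_pseudo = np.zeros((8, 8), dtype=int); tgt_pseudo[4:, :] = 1
styled = class_conditional_adain(src, tgt, src_labels, tgt_pseudo, num_classes=2)
```

After this transfer, each class region of the styled source features matches the corresponding target class statistics; in the paper's pipeline, the segmentation model is then fine-tuned on such transferred source images (with the original source labels) in a second step.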
Related papers
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Domain Adaptation via Prompt Learning [39.97105851723885]
Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain.
We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL)
arXiv Detail & Related papers (2022-02-14T13:25:46Z)
- Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that effectively relieves the computational burden of prior approaches.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z)
- The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization [10.274423413222763]
Domain adaptation is crucial to adapt a learned model to new scenarios, such as domain shifts or changing data distributions.
Current approaches usually require a large amount of labeled or unlabeled data from the shifted domain.
We propose Dynamic Unsupervised Adaptation (DUA) to overcome this problem.
arXiv Detail & Related papers (2021-12-01T12:43:41Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease the domain shift by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.