Integrating Categorical Semantics into Unsupervised Domain Translation
- URL: http://arxiv.org/abs/2010.01262v2
- Date: Tue, 16 Mar 2021 22:10:11 GMT
- Title: Integrating Categorical Semantics into Unsupervised Domain Translation
- Authors: Samuel Lavoie, Faruk Ahmed, Aaron Courville
- Abstract summary: We propose a method to learn, in an unsupervised manner, categorical semantic features that are invariant of the source and target domains.
We show that conditioning the style encoder of unsupervised domain translation methods on the learned categorical semantics leads to a translation preserving the digits on MNIST$\leftrightarrow$SVHN.
- Score: 6.853826783413853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While unsupervised domain translation (UDT) has seen a lot of success
recently, we argue that mediating its translation via categorical semantic
features could broaden its applicability. In particular, we demonstrate that
categorical semantics improves the translation between perceptually different
domains sharing multiple object categories. We propose a method to learn, in an
unsupervised manner, categorical semantic features (such as object labels) that
are invariant of the source and target domains. We show that conditioning the
style encoder of unsupervised domain translation methods on the learned
categorical semantics leads to a translation preserving the digits on
MNIST$\leftrightarrow$SVHN and to a more realistic stylization on
Sketches$\to$Reals.
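The abstract describes conditioning a style encoder on categorical semantic features inferred without supervision. The following is a minimal illustrative sketch, not the paper's actual architecture: all layer shapes, the linear encoders, and the names (`semantics_encoder`, `style_encoder`) are hypothetical stand-ins showing how a categorical posterior over shared object categories can be fed into a style encoder alongside the image features.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10   # e.g. digit categories shared by MNIST and SVHN
STYLE_DIM = 8
FEAT_DIM = 16

# Hypothetical semantics encoder: maps image features to a categorical
# distribution over shared object categories (softmax over linear logits).
W_sem = rng.normal(size=(FEAT_DIM, NUM_CLASSES))

def semantics_encoder(x):
    logits = x @ W_sem
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # rows sum to 1

# Hypothetical style encoder conditioned on the categorical code: the
# soft semantics vector is concatenated to the input, so style is
# extracted *given* the inferred category.
W_sty = rng.normal(size=(FEAT_DIM + NUM_CLASSES, STYLE_DIM))

def style_encoder(x, semantics):
    return np.tanh(np.concatenate([x, semantics], axis=-1) @ W_sty)

x = rng.normal(size=(4, FEAT_DIM))  # a batch of image features
p = semantics_encoder(x)            # (4, NUM_CLASSES) categorical posterior
s = style_encoder(x, p)             # (4, STYLE_DIM) category-aware style code
```

Because the style code is computed jointly with the categorical posterior, a translator using it can change appearance while the inferred category, and hence the digit identity, is preserved.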
Related papers
- Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation [27.695825570272874]
Conventional Unsupervised Domain Adaptation (UDA) strives to minimize distribution discrepancy between domains.
We propose Domain-Agnostic Mutual Prompting (DAMP) to exploit domain-invariant semantics.
Experiments on three UDA benchmarks demonstrate the superiority of DAMP over state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-05T12:06:48Z)
- One-for-All: Towards Universal Domain Translation with a Single StyleGAN [86.33216867136639]
We propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains.
The proposed UniTranslator is versatile and capable of performing various tasks, including style mixing, stylization, and translations.
UniTranslator surpasses the performance of existing general-purpose models and performs well against specialized models in representative tasks.
arXiv Detail & Related papers (2023-10-22T08:02:55Z)
- Semantic Consistency in Image-to-Image Translation for Unsupervised Domain Adaptation [22.269565708490465]
Unsupervised Domain Adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labelled data is available.
We propose a semantically consistent image-to-image translation method in combination with a consistency regularisation method for UDA.
arXiv Detail & Related papers (2021-11-05T14:22:20Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Get away from Style: Category-Guided Domain Adaptation for Semantic Segmentation [15.002381934551359]
Unsupervised domain adaptation (UDA) has become increasingly popular for tackling real-world problems where no ground truth is available for the target domain.
In this paper, we focus on UDA for semantic segmentation task.
We propose a style-independent content feature extraction mechanism that keeps the style information of extracted features in a similar space.
Second, to balance pseudo-labels across categories, we propose a category-guided threshold mechanism that selects category-wise pseudo-labels for self-supervised learning.
arXiv Detail & Related papers (2021-03-29T10:00:50Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effective signed attribute vector, which enables continuous translation along diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation [95.10255219396109]
We propose a fine-grained adversarial learning strategy for class-level feature alignment.
We adopt a fine-grained domain discriminator that not only distinguishes between domains but also differentiates them at the class level.
An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment.
arXiv Detail & Related papers (2020-07-17T20:50:59Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches show that semantic-level alignment helps in tackling the domain-shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
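Several of the listed papers align category-wise feature centroids across domains (e.g. the contrastive learning and self-training entry above). The following is a hedged, minimal sketch of that general idea, not any specific paper's method: the feature dimensions, `class_centroids` helper, and the squared-L2 alignment loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

NUM_CLASSES, FEAT_DIM = 3, 4

def class_centroids(feats, labels, num_classes):
    """Mean feature vector per class; zero vector for empty classes."""
    cents = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = feats[mask].mean(axis=0)
    return cents

src_feats = rng.normal(size=(12, FEAT_DIM))
src_labels = rng.integers(0, NUM_CLASSES, size=12)    # ground-truth source labels
tgt_feats = rng.normal(size=(12, FEAT_DIM))
tgt_pseudo = rng.integers(0, NUM_CLASSES, size=12)    # pseudo-labels on target

src_c = class_centroids(src_feats, src_labels, NUM_CLASSES)
tgt_c = class_centroids(tgt_feats, tgt_pseudo, NUM_CLASSES)

# Category-wise alignment loss: pull each target-domain centroid toward
# the same-class source-domain centroid (squared L2, averaged over classes).
align_loss = np.mean(np.sum((src_c - tgt_c) ** 2, axis=1))
```

In practice such a loss is added to a task loss and minimized by gradient descent, with the pseudo-labels refreshed as the target-domain predictions improve.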
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.