Unsupervised Multiple Domain Translation through Controlled Disentanglement in Variational Autoencoder
- URL: http://arxiv.org/abs/2401.09180v2
- Date: Thu, 18 Jan 2024 09:51:46 GMT
- Title: Unsupervised Multiple Domain Translation through Controlled Disentanglement in Variational Autoencoder
- Authors: Antonio Almudévar and Théo Mariotte and Alfonso Ortega and Marie Tahon
- Abstract summary: Unsupervised Multiple Domain Translation is the task of transforming data from one domain to other domains without having paired data to train the systems.
Our proposal relies on a modified version of a Variational Autoencoder.
One of these latent variables is constrained to depend exclusively on the domain, while the other must capture the remaining factors of variability in the data.
- Score: 1.7611027732647493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Multiple Domain Translation is the task of transforming data
from one domain to other domains without having paired data to train the
systems. Typically, methods based on Generative Adversarial Networks (GANs) are
used to address this task. However, our proposal exclusively relies on a
modified version of a Variational Autoencoder. This modification consists of
the use of two latent variables that are disentangled in a controlled way by
design. One of these latent variables is constrained to depend exclusively on
the domain, while the other must capture the remaining factors of variability
in the data. Additionally, the conditions imposed on the domain latent variable
allow for better control and understanding of the latent space. We empirically
demonstrate that our approach works on several vision datasets, outperforming
other well-known methods. Finally, we show that one of the latent variables
does indeed store all the domain-related information, while the other contains
hardly any.
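The translation mechanism the abstract describes, keeping one latent for the domain and one for everything else, then swapping the domain latent at translation time, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the linear encoder/decoder, the one-hot domain code, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes; the paper's architecture and dimensions will differ.
N_DOMAINS, D_CONTENT, D_IN = 2, 8, 16

# Linear maps stand in for the encoder and decoder networks.
W_enc = 0.1 * rng.normal(size=(D_IN, D_CONTENT))
W_dec = 0.1 * rng.normal(size=(D_CONTENT + N_DOMAINS, D_IN))

def domain_latent(d):
    # The domain latent depends only on the domain label; a one-hot code
    # is the simplest such "controlled" choice.
    z = np.zeros(N_DOMAINS)
    z[d] = 1.0
    return z

def encode_content(x):
    # The content latent should capture every factor except the domain.
    return x @ W_enc

def decode(z_content, z_domain):
    return np.concatenate([z_content, z_domain]) @ W_dec

def translate(x, target_domain):
    # Domain translation: keep the content latent, swap in the target
    # domain's latent, and decode.
    return decode(encode_content(x), domain_latent(target_domain))

x = rng.normal(size=D_IN)
x_as_domain_1 = translate(x, target_domain=1)
```

Because the domain latent is fixed by the label rather than inferred, the same content latent decodes to a different domain simply by changing that one input.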
Related papers
- Identifiable Latent Causal Content for Domain Adaptation under Latent Covariate Shift [82.14087963690561]
Multi-source domain adaptation (MSDA) addresses the challenge of learning a label prediction function for an unlabeled target domain.
We present an intricate causal generative model by introducing latent noises across domains, along with a latent content variable and a latent style variable.
The proposed approach showcases exceptional performance and efficacy on both simulated and real-world datasets.
arXiv Detail & Related papers (2022-08-30T11:25:15Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
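Injecting "style noise" from another domain is often done by swapping per-channel feature statistics (an AdaIN-style operation); the sketch below assumes that variant, and DiMAE's exact augmentation may differ.

```python
import numpy as np

def style_noise(content, style, eps=1e-5):
    # Replace the per-channel mean/std of the content features with those
    # of a style sample from another domain (AdaIN-style statistic swap).
    # content, style: arrays of shape (C, H, W).
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return (content - c_mu) / c_std * s_std + s_mu

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))                      # image from one domain
s = rng.normal(loc=2.0, scale=0.5, size=(3, 8, 8))  # style sample, other domain
x_aug = style_noise(x, s)
# A reconstruction loss would then push the model to recover x from the
# embedding of x_aug, encouraging domain-invariant features.
```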
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
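An instance-adaptive residual can be sketched as a tiny hypernetwork that predicts, per instance, the weights of its own residual transform. Everything below (the linear hypernetwork `W_hyper`, the sizes, the function names) is an assumption for illustration, not DIDA-Net's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # assumed feature dimension

# Hypothetical hypernetwork: a linear map from an instance's feature to the
# weights of that instance's own residual transform.
W_hyper = 0.1 * rng.normal(size=(D, D * D))

def instance_adaptive(feat):
    # feat: (D,) domain-agnostic feature of one instance.
    k = (feat @ W_hyper).reshape(D, D)  # instance-conditioned kernel
    return feat + feat @ k              # add the instance-adaptive residual

x = rng.normal(size=D)
y = instance_adaptive(x)
```

The point of the design is that no domain labels are needed: each sample conditions its own adaptation.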
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Exploiting Both Domain-specific and Invariant Knowledge via a Win-win Transformer for Unsupervised Domain Adaptation [14.623272346517794]
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Most existing UDA approaches enable knowledge transfer via learning domain-invariant representation and sharing one classifier across two domains.
We propose a Win-Win TRansformer framework (WinTR) that separately explores the domain-specific knowledge for each domain and interchanges cross-domain knowledge.
arXiv Detail & Related papers (2021-11-25T06:45:07Z)
- Learning Disentangled Semantic Representation for Domain Adaptation [39.055191615410244]
We aim to extract the domain invariant semantic information in the latent disentangled semantic representation of the data.
Under the above assumption, we employ a variational auto-encoder to reconstruct the semantic latent variables and domain latent variables.
We devise a dual adversarial network to disentangle these two sets of reconstructed latent variables.
arXiv Detail & Related papers (2020-12-22T03:03:36Z)
- Semi-Supervised Disentangled Framework for Transferable Named Entity Recognition [27.472171967604602]
We present a semi-supervised framework for transferable NER, which disentangles the domain-invariant latent variables and domain-specific latent variables.
Our model obtains state-of-the-art performance on cross-domain and cross-lingual NER benchmark datasets.
arXiv Detail & Related papers (2020-12-22T02:55:04Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifier Network (AD$^2$CN) to align the source and target domain data distributions while simultaneously matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
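A Gram-matrix domain embedding can be sketched as follows. This is an assumption-laden illustration (random stand-in features, plain cosine similarity); Domain2Vec's joint feature-disentanglement training is not reproduced here.

```python
import numpy as np

def gram_embedding(features):
    # features: (N, C) feature vectors extracted from one domain's images.
    # The Gram matrix summarizes second-order feature statistics of the domain.
    g = features.T @ features / len(features)
    return g.flatten()  # vectorial representation of the domain

def domain_similarity(emb_a, emb_b):
    # Cosine similarity between two domain embeddings.
    return emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))

rng = np.random.default_rng(0)
photos = rng.normal(size=(100, 16))     # stand-in features for one domain
sketches = rng.normal(size=(100, 16))   # another domain with different
sketches[:, :8] *= 3.0                  # feature statistics

emb_photos = gram_embedding(photos)
emb_sketch = gram_embedding(sketches)
sim = domain_similarity(emb_photos, emb_sketch)
```

Domains with similar feature statistics get similar embeddings, so the similarity score can rank how "close" two visual domains are.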
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
- Dual Adversarial Domain Adaptation [6.69797982848003]
Unsupervised domain adaptation aims at transferring knowledge from the labeled source domain to the unlabeled target domain.
Recent experiments have shown that when the discriminator is provided with domain information in both domains, it is able to preserve the complex multimodal information.
We adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignments simultaneously in a single discriminator.
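One plausible reading of the $2K$-dimensional output is joint (domain, class) indexing: the first K units correspond to source-domain classes and the next K to target-domain classes, so a single discriminator aligns domains and classes at once. The helper name and arguments below are hypothetical.

```python
def joint_label(class_idx, from_target, num_classes):
    # Discriminator output has 2K units: [source classes | target classes].
    # A source sample of class c maps to unit c; a target sample (with a
    # pseudo-label) of class c maps to unit K + c, so one softmax head
    # performs domain-level and class-level alignment simultaneously.
    return class_idx + (num_classes if from_target else 0)

K = 10
src_label = joint_label(3, from_target=False, num_classes=K)  # unit 3
tgt_label = joint_label(3, from_target=True, num_classes=K)   # unit K + 3
```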
arXiv Detail & Related papers (2020-01-01T07:10:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.