Attribute-based Regularization of Latent Spaces for Variational
Auto-Encoders
- URL: http://arxiv.org/abs/2004.05485v3
- Date: Wed, 29 Jul 2020 01:16:24 GMT
- Title: Attribute-based Regularization of Latent Spaces for Variational
Auto-Encoders
- Authors: Ashis Pati, Alexander Lerch
- Abstract summary: We present a novel method to structure the latent space of a Variational Auto-Encoder (VAE) to encode different continuous-valued attributes explicitly.
This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded.
- Score: 79.68916470119743
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Selective manipulation of data attributes using deep generative models is an
active area of research. In this paper, we present a novel method to structure
the latent space of a Variational Auto-Encoder (VAE) to encode different
continuous-valued attributes explicitly. This is accomplished by using an
attribute regularization loss which enforces a monotonic relationship between
the attribute values and the latent code of the dimension along which the
attribute is to be encoded. Consequently, post-training, the model can be used
to manipulate the attribute by simply changing the latent code of the
corresponding regularized dimension. The results obtained from several
quantitative and qualitative experiments show that the proposed method leads to
disentangled and interpretable latent spaces that can be used to effectively
manipulate a wide range of data attributes spanning image and symbolic music
domains.
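The monotonicity constraint described in the abstract can be sketched as a pairwise loss over a batch: the ordering of latent codes along the regularized dimension is pushed to agree with the ordering of the attribute values. The sketch below is illustrative, not the authors' reference implementation; the function name, the use of NumPy, and the `delta` sharpness hyperparameter are assumptions made for this example.

```python
import numpy as np

def attribute_regularization_loss(z, attr, reg_dim=0, delta=1.0):
    """Sketch of a monotonicity-enforcing attribute regularization loss.

    z       : (batch, latent_dim) array of latent codes
    attr    : (batch,) array of continuous attribute values
    reg_dim : index of the latent dimension meant to encode the attribute
    delta   : scaling hyperparameter controlling the tanh sharpness
    """
    lc = z[:, reg_dim]
    # All pairwise differences of latent codes and of attribute values.
    lc_diff = lc[:, None] - lc[None, :]
    attr_diff = attr[:, None] - attr[None, :]
    # Penalize pairs whose latent-code ordering disagrees with the
    # attribute ordering: tanh(delta * lc_diff) should match sign(attr_diff).
    return np.mean(np.abs(np.tanh(delta * lc_diff) - np.sign(attr_diff)))
```

In training, a term like this would be added to the usual VAE objective (reconstruction plus KL divergence) with its own weight; a perfectly monotonic latent dimension drives the term toward zero, while an anti-monotonic one is maximally penalized.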
Related papers
- Text Attribute Control via Closed-Loop Disentanglement [72.2786244367634]
We propose a novel approach to achieve robust control of attributes while enhancing content preservation.
In this paper, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.
We conducted experiments on three text datasets, including the Yelp Service review dataset, the Amazon Product review dataset, and the GoEmotions dataset.
arXiv Detail & Related papers (2023-12-01T01:26:38Z) - Exploring Attribute Variations in Style-based GANs using Diffusion
Models [48.98081892627042]
We formulate the task of diverse attribute editing by modeling the multidimensional nature of attribute edits.
We capitalize on disentangled latent spaces of pretrained GANs and train a Denoising Diffusion Probabilistic Model (DDPM) to learn the latent distribution for diverse edits.
arXiv Detail & Related papers (2023-11-27T18:14:03Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Attri-VAE: attribute-based, disentangled and interpretable
representations of medical images with variational autoencoders [0.5451140334681147]
We propose a VAE approach that includes an attribute regularization term to associate clinical and medical imaging attributes with different regularized dimensions in the generated latent space.
The proposed model provided an excellent trade-off between reconstruction fidelity, disentanglement, and interpretability, outperforming state-of-the-art VAE approaches.
arXiv Detail & Related papers (2022-03-20T00:19:40Z) - Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z) - Multi-Attribute Balanced Sampling for Disentangled GAN Controls [0.0]
Various controls over the generated data can be extracted from the latent space of a pre-trained GAN.
We show that this approach outperforms state-of-the-art classifier-based methods while avoiding the need for disentanglement-enforcing post-processing.
arXiv Detail & Related papers (2021-10-28T08:44:13Z) - Disentangled Face Attribute Editing via Instance-Aware Latent Space
Search [30.17338705964925]
A rich set of semantic directions exists in the latent space of Generative Adversarial Networks (GANs).
Existing methods may suffer from poor disentanglement of attribute variations, leading to unwanted changes in other attributes when altering the desired one.
We propose a novel framework (IALS) that performs Instance-Aware Latent-Space Search to find semantic directions for disentangled attribute editing.
arXiv Detail & Related papers (2021-05-26T16:19:08Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent
Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
When paired with a strong auto-regressive decoder, VAEs tend to ignore their latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z) - MulGAN: Facial Attribute Editing by Exemplar [2.272764591035106]
Existing methods encode attribute-related information from images into a predefined region of the latent feature space by training the model on pairs of images with opposite attributes.
They suffer from three limitations: (1) the model must be trained on pairs of images with opposite attributes; (2) weak capability of editing multiple attributes by exemplars; and (3) poor quality of the generated images.
arXiv Detail & Related papers (2019-12-28T04:02:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.