Unsupervised Disentanglement with Tensor Product Representations on the Torus
- URL: http://arxiv.org/abs/2202.06201v1
- Date: Sun, 13 Feb 2022 04:23:12 GMT
- Title: Unsupervised Disentanglement with Tensor Product Representations on the Torus
- Authors: Michael Rotman, Amit Dekel, Shir Gur, Yaron Oz, Lior Wolf
- Abstract summary: Current methods for learning representations with auto-encoders almost exclusively employ vectors as the latent representations.
In this work, we propose to employ a tensor product structure for this purpose.
In contrast to conventional variational methods, which target normally distributed features, the latent space in our representation is distributed uniformly over a set of unit circles.
- Score: 78.6315881294899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current methods for learning representations with auto-encoders almost
exclusively employ vectors as the latent representations. In this work, we
propose to employ a tensor product structure for this purpose. This way, the
obtained representations are naturally disentangled. In contrast to the
conventional variational methods, which are targeted toward normally distributed
features, the latent space in our representation is distributed uniformly over
a set of unit circles. We argue that the torus structure of the latent space
captures the generative factors effectively. We employ recent tools for
measuring unsupervised disentanglement, and in an extensive set of experiments
demonstrate the advantage of our method in terms of disentanglement,
completeness, and informativeness. The code for our proposed method is
available at https://github.com/rotmanmi/Unsupervised-Disentanglement-Torus.
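To make the structure concrete, here is a minimal PyTorch sketch of the idea (our illustration under assumptions, not the authors' implementation; their actual code is at the repository above). All names and layer sizes are hypothetical: an encoder head predicts one angle per generative factor, embeds each angle on a unit circle, and combines the circle embeddings with a tensor product.

```python
import torch
import torch.nn as nn

class TorusLatent(nn.Module):
    """Minimal sketch (hypothetical names/sizes): predict one angle per
    factor, embed each angle on a unit circle, and combine the circle
    embeddings with a tensor product."""

    def __init__(self, in_dim: int, num_circles: int):
        super().__init__()
        self.num_circles = num_circles
        self.to_angles = nn.Linear(in_dim, num_circles)  # one angle per factor

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        theta = self.to_angles(h)                                  # (B, K)
        # Each factor lives on S^1 via (cos theta_k, sin theta_k).
        circles = torch.stack((theta.cos(), theta.sin()), dim=-1)  # (B, K, 2)
        # Tensor (outer) product of the K circle embeddings: a point on
        # the K-torus embedded in R^(2^K).
        z = circles[:, 0]                                          # (B, 2)
        for k in range(1, self.num_circles):
            z = torch.einsum('bi,bj->bij', z, circles[:, k]).flatten(1)
        return z
```

Since each angle enters only through (cos θ_k, sin θ_k), every factor is constrained to a unit circle, and the product of K circles forms a K-torus; with K circles the combined representation has 2^K dimensions.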
Related papers
- Disentangling Disentangled Representations: Towards Improved Latent Units via Diffusion Models [3.1923251959845214]
Disentangled representation learning (DRL) aims to break down observed data into core intrinsic factors for a profound understanding of the data.
Recently, there have been limited explorations of utilizing diffusion models (DMs) for unsupervised DRL.
We propose Dynamic Gaussian Anchoring to enforce attribute-separated latent units for more interpretable DRL.
We also propose the Skip Dropout technique, which easily modifies the denoising U-Net to be more DRL-friendly.
arXiv Detail & Related papers (2024-10-31T11:05:09Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models; a minimal sketch of per-dimension quantization appears after this list.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation [64.92152574895111]
We propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative models to learn disentangled representations; a sketch of the penalty appears after this list.
Our method is effective for disentangled and controllable image generation and performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2021-08-17T15:01:46Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights; a sketch of the idea appears after this list.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
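For the latent-quantization entry above, a minimal sketch of our reading of the idea (hypothetical names and sizes, not the authors' code): each latent dimension gets its own small set of learned scalar values, the encoder output is snapped to the nearest one, and a straight-through estimator lets gradients flow.

```python
import torch
import torch.nn as nn

class ScalarLatentQuantizer(nn.Module):
    """Illustrative sketch of per-dimension latent quantization
    (hypothetical names/sizes): each latent dimension is snapped to
    its own small set of learned scalar code values."""

    def __init__(self, num_dims: int, values_per_dim: int):
        super().__init__()
        # One independent scalar codebook per latent dimension.
        self.codebooks = nn.Parameter(torch.randn(num_dims, values_per_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (B, D)
        # Squared distance from each latent scalar to its codebook values.
        dists = (z.unsqueeze(-1) - self.codebooks.unsqueeze(0)) ** 2  # (B, D, V)
        idx = dists.argmin(dim=-1)                                    # (B, D)
        z_q = torch.gather(self.codebooks.expand(z.size(0), -1, -1),
                           2, idx.unsqueeze(-1)).squeeze(-1)          # (B, D)
        # Straight-through estimator: quantized forward, identity backward.
        return z + (z_q - z).detach()
```

Keeping the codebooks per-dimension (rather than one joint codebook over vectors) is what biases each latent unit toward a small, organized set of values.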
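For the OroJaR entry, a simplified sketch of the penalty (a finite-difference approximation of our own, not the paper's exact formulation): approximate the generator's Jacobian column for each latent dimension and penalize non-orthogonality between columns.

```python
import torch

def orthogonal_jacobian_penalty(generator, z: torch.Tensor,
                                eps: float = 1e-2) -> torch.Tensor:
    """Sketch only: encourage the generator's Jacobian columns for
    different latent dimensions to be mutually orthogonal, using a
    forward-difference approximation of each column."""
    base = generator(z).flatten(1)                       # (B, P)
    cols = []
    for d in range(z.size(1)):
        e = torch.zeros_like(z)
        e[:, d] = eps
        # d-th Jacobian column, approximated by a forward difference.
        cols.append((generator(z + e).flatten(1) - base) / eps)
    J = torch.stack(cols, dim=-1)                        # (B, P, D)
    gram = torch.einsum('bpi,bpj->bij', J, J)            # (B, D, D)
    # Penalize only the off-diagonal (cross-dimension) interactions.
    off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=1, dim2=2))
    return (off_diag ** 2).mean()
```

Adding this term to the generator loss pushes each latent dimension to perturb the output along a direction independent of the others, which is the disentanglement effect the entry describes.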
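For the closed-form factorization entry, a SeFa-style sketch (our simplification): candidate semantic directions are the top eigenvectors of W^T W, where W is the weight matrix of the first layer that projects the latent code, so no training or sampling is needed.

```python
import torch

def closed_form_directions(weight: torch.Tensor, k: int) -> torch.Tensor:
    """Sketch only: given the (out_dim, latent_dim) weight of the first
    latent-projection layer, return the top-k eigenvectors of W^T W as
    candidate semantic editing directions."""
    gram = weight.t() @ weight                    # (latent_dim, latent_dim)
    eigvals, eigvecs = torch.linalg.eigh(gram)    # eigenvalues ascending
    # The largest eigenvalues mark the directions the layer amplifies
    # most; return them largest-first as row vectors.
    return eigvecs[:, -k:].flip(-1).t()           # (k, latent_dim)
```

An edit then moves a latent code along a returned direction, z' = z + alpha * d, to vary one discovered factor at a time.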