Unsupervised Disentanglement with Tensor Product Representations on the
Torus
- URL: http://arxiv.org/abs/2202.06201v1
- Date: Sun, 13 Feb 2022 04:23:12 GMT
- Title: Unsupervised Disentanglement with Tensor Product Representations on the
Torus
- Authors: Michael Rotman, Amit Dekel, Shir Gur, Yaron Oz, Lior Wolf
- Abstract summary: Current methods for learning representations with auto-encoders almost exclusively employ vectors as the latent representations.
In this work, we propose to employ a tensor product structure for this purpose.
In contrast to conventional variational methods, which are targeted toward normally distributed features, the latent space in our representation is distributed uniformly over a set of unit circles.
- Score: 78.6315881294899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current methods for learning representations with auto-encoders almost
exclusively employ vectors as the latent representations. In this work, we
propose to employ a tensor product structure for this purpose. This way, the
obtained representations are naturally disentangled. In contrast to the
conventional variational methods, which are targeted toward normally distributed
features, the latent space in our representation is distributed uniformly over
a set of unit circles. We argue that the torus structure of the latent space
captures the generative factors effectively. We employ recent tools for
measuring unsupervised disentanglement, and in an extensive set of experiments
demonstrate the advantage of our method in terms of disentanglement,
completeness, and informativeness. The code for our proposed method is
available at https://github.com/rotmanmi/Unsupervised-Disentanglement-Torus.
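The abstract describes a latent space distributed uniformly over a set of unit circles, i.e. a torus. A minimal sketch of that idea, assuming (hypothetically) that consecutive pairs of encoder outputs are normalized onto unit circles — the paper's exact parameterization may differ:

```python
import numpy as np

def to_torus(z):
    """Project pairs of latent coordinates onto unit circles.

    z: array of shape (batch, 2 * k). Each consecutive pair
    (z_{2i}, z_{2i+1}) is normalized to unit length, so the full
    latent code lives on a k-torus (a product of k unit circles).
    Hypothetical helper, not the authors' exact implementation.
    """
    batch, dim = z.shape
    assert dim % 2 == 0, "latent dimension must be even"
    pairs = z.reshape(batch, dim // 2, 2)
    norms = np.linalg.norm(pairs, axis=-1, keepdims=True)
    # Guard against division by zero for (near-)zero pairs.
    return (pairs / np.clip(norms, 1e-8, None)).reshape(batch, dim)

z = np.random.randn(4, 6)   # 3 circles -> a 3-torus
t = to_torus(z)
```

Because each circle is compact and the angle is a single generative factor, uniform distributions over the circles are a natural alternative to the Gaussian priors used by conventional variational methods.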
Related papers
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation [64.92152574895111]
We propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative models to learn disentangled representations.
Our method is effective in disentangled and controllable image generation, and performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-17T15:01:46Z)
- LARGE: Latent-Based Regression through GAN Semantics [42.50535188836529]
We propose a novel method for solving regression tasks using few-shot or weak supervision.
We show that our method can be applied across a wide range of domains, leverage multiple latent direction discovery frameworks, and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-07-22T17:55:35Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
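The closed-form factorization paper listed above discovers latent semantics by directly decomposing pre-trained generator weights. A minimal sketch of that idea, assuming (as an illustration, not the authors' exact code) that the relevant weight is the first affine layer's matrix W and that candidate directions are the top eigenvectors of W^T W:

```python
import numpy as np

def closed_form_directions(W, k=3):
    """Closed-form latent direction discovery, simplified sketch.

    W: weight matrix of the generator's first affine layer,
       shape (out_dim, latent_dim).
    Returns the top-k eigenvectors of W^T W as rows, shape (k, latent_dim).
    Directions with large eigenvalues change the output most and are
    taken as candidate interpretable directions.
    """
    eigvals, eigvecs = np.linalg.eigh(W.T @ W)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # sort descending
    return eigvecs[:, order[:k]].T

W = np.random.randn(64, 8)
dirs = closed_form_directions(W, k=3)
```

No training data or sampling is needed: the decomposition is computed once from the frozen weights, which is what makes the method "closed-form".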
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.