Learning disentangled representations via product manifold projection
- URL: http://arxiv.org/abs/2103.01638v1
- Date: Tue, 2 Mar 2021 10:59:59 GMT
- Title: Learning disentangled representations via product manifold projection
- Authors: Marco Fumero, Luca Cosmo, Simone Melzi, Emanuele Rodolà
- Abstract summary: We propose a novel approach to disentangle the generative factors of variation underlying a given set of observations.
Our method builds upon the idea that the (unknown) low-dimensional manifold underlying the data space can be explicitly modeled as a product of submanifolds.
- Score: 10.677966716893762
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a novel approach to disentangle the generative factors of
variation underlying a given set of observations. Our method builds upon the
idea that the (unknown) low-dimensional manifold underlying the data space can
be explicitly modeled as a product of submanifolds. This gives rise to a new
definition of disentanglement, and to a novel weakly-supervised algorithm for
recovering the unknown explanatory factors behind the data. At training time,
our algorithm only requires pairs of non-i.i.d. data samples whose elements
share at least one, possibly multidimensional, generative factor of variation.
We require no knowledge of the nature of these transformations, and do not make
any limiting assumption on the properties of each subspace. Our approach is
easy to implement, and can be successfully applied to different kinds of data
(from images to 3D surfaces) undergoing arbitrary transformations. In addition
to standard synthetic benchmarks, we showcase our method in challenging
real-world applications, where we compare favorably with the state of the art.
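To make the training signal concrete, here is a minimal sketch, assuming a plain autoencoder whose latent code is split into fixed-size blocks, one per submanifold. The names (`FactorizedAE`, `pair_loss`) are illustrative, and assuming the index of the shared factor is given is stronger supervision than the paper requires; this is not the paper's product-manifold projection.

```python
# Minimal sketch: an autoencoder whose latent code is split into blocks,
# one per submanifold, trained so paired samples agree on their shared block.
# Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn

class FactorizedAE(nn.Module):
    def __init__(self, x_dim=784, n_blocks=4, block_dim=2):
        super().__init__()
        self.n, self.d = n_blocks, block_dim
        z_dim = n_blocks * block_dim
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z.view(-1, self.n, self.d)  # (B, n_blocks, block_dim)

def pair_loss(model, x1, x2, shared_k):
    """x1 and x2 share the factor assumed to live in latent block shared_k."""
    r1, z1 = model(x1)
    r2, z2 = model(x2)
    recon = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
    match = ((z1[:, shared_k] - z2[:, shared_k]) ** 2).mean()  # shared block agrees
    return recon + match

model = FactorizedAE()
x1, x2 = torch.rand(8, 784), torch.rand(8, 784)
pair_loss(model, x1, x2, shared_k=0).backward()
```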
Related papers
- Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers [0.7704032792820767]
Deep neural networks are applied in more and more areas of everyday life.
They still lack essential abilities, such as robustly dealing with spatially transformed input signals.
We propose a novel technique that emulates such a transformation-aware inference process for neural nets; a toy version is sketched below.
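The summary leaves that inference process implicit; one simple way to get transformation-aware inference (an assumption for illustration, not the paper's technique) is to classify several rotated copies of the input and keep the most confident prediction:

```python
# Hypothetical test-time search over rotations: classify rotated copies of the
# input and keep the most confident logits per sample. Not the paper's method.
import torch
import torchvision.transforms.functional as TF

def rotation_search(classifier, x, angles=(-30.0, -15.0, 0.0, 15.0, 30.0)):
    """x: (B, C, H, W) image batch; returns the best logits per sample."""
    best_logits, best_conf = None, None
    for a in angles:
        logits = classifier(TF.rotate(x, a))           # (B, num_classes)
        conf = logits.softmax(dim=-1).amax(dim=-1)     # (B,) max class probability
        if best_logits is None:
            best_logits, best_conf = logits, conf
        else:
            upd = conf > best_conf
            best_logits = torch.where(upd.unsqueeze(-1), logits, best_logits)
            best_conf = torch.where(upd, conf, best_conf)
    return best_logits
```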
arXiv Detail & Related papers (2024-05-06T09:47:29Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
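As a sketch of what computing such geodesics can look like in practice (illustrative, assuming any differentiable `decoder` from latent to data space and 1-D latent endpoint vectors; not VTAE's actual procedure), one can minimize the energy of a discretized latent curve measured in data space:

```python
# Discrete geodesic under the decoder's pull-back metric: optimize the
# interior points of a latent curve to minimize its energy in data space.
import torch

def geodesic(decoder, z_a, z_b, n_pts=16, steps=200, lr=1e-2):
    ts = torch.linspace(0, 1, n_pts)[1:-1].unsqueeze(-1)
    path = ((1 - ts) * z_a + ts * z_b).clone().requires_grad_(True)  # linear init
    opt = torch.optim.Adam([path], lr=lr)
    for _ in range(steps):
        curve = torch.cat([z_a.unsqueeze(0), path, z_b.unsqueeze(0)])
        x = decoder(curve)                       # decode every curve point
        energy = ((x[1:] - x[:-1]) ** 2).sum()   # discrete curve energy
        opt.zero_grad(); energy.backward(); opt.step()
    return torch.cat([z_a.unsqueeze(0), path.detach(), z_b.unsqueeze(0)])
```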
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
A promising solution is to take advantage of generative models to hallucinate realistic unseen samples based on knowledge learned from the seen classes.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
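For reference, a minimal conditional affine coupling layer, the building block named here, might look as follows (a generic sketch, not GSMFlow's exact architecture; the conditioning vector `c` stands in for the semantic attributes):

```python
# Conditional affine coupling: half the vector passes through unchanged and,
# together with the condition, predicts an affine transform of the other half.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, c):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, c], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                     # keep scales numerically tame
        y2 = x2 * s.exp() + t
        log_det = s.sum(dim=-1)               # log|det J|, needed for the flow loss
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y, c):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, c], dim=-1)).chunk(2, dim=-1)
        x2 = (y2 - t) * (-torch.tanh(s)).exp()
        return torch.cat([y1, x2], dim=-1)
```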
arXiv Detail & Related papers (2022-07-05T04:04:37Z)
- Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower-dimensional coordinates.
This article proposes a new methodology that addresses these problems, allowing the manifold to be estimated between fitted data points.
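A toy version of the idea, estimating a manifold between fitted points with an off-the-shelf Gaussian process (illustrative, not the article's estimator):

```python
# Fit a GP from 1-D intrinsic coordinates to ambient space, then query it
# between the fitted points for a smooth manifold estimate with uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t = np.linspace(0, 2 * np.pi, 30)[:, None]                       # intrinsic coords
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * np.random.randn(30, 2)  # noisy circle

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(t, X)

t_fine = np.linspace(0, 2 * np.pi, 300)[:, None]
X_fine, X_std = gp.predict(t_fine, return_std=True)  # manifold between samples
```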
arXiv Detail & Related papers (2021-10-14T15:50:38Z)
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow (GSMFlow) framework that learns to synthesize unseen data efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- Learning Identity-Preserving Transformations on Data Manifolds [14.31845138586011]
Many machine learning techniques incorporate identity-preserving transformations into their models to generalize their performance to previously unseen data.
We develop a learning strategy that does not require transformation labels, together with a method that learns the local regions where each operator is likely to be used.
Experiments on MNIST and Fashion MNIST highlight our model's ability to learn identity-preserving transformations on multi-class datasets.
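One common way to parameterize such learned identity-preserving operators (a generic transport-operator sketch under that assumption, not necessarily the paper's model) is a dictionary of matrices acting through the matrix exponential:

```python
# Learned operators A_m act on latent points via exp(sum_m c_m A_m),
# a standard Lie-group-style parameterization of smooth transformations.
import torch

class TransportOperators(torch.nn.Module):
    def __init__(self, z_dim=8, n_ops=4):
        super().__init__()
        self.A = torch.nn.Parameter(0.01 * torch.randn(n_ops, z_dim, z_dim))

    def forward(self, z, coeffs):
        """z: (B, z_dim); coeffs: (n_ops,) mixing weights."""
        gen = torch.einsum('m,mij->ij', coeffs, self.A)   # combined generator
        return z @ torch.linalg.matrix_exp(gen).T         # transform the batch

ops = TransportOperators()
z_new = ops(torch.randn(5, 8), torch.tensor([0.5, 0.0, 0.0, 0.0]))
```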
arXiv Detail & Related papers (2021-06-22T23:10:25Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built around a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
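For context, the standard local-PCA estimate that tangent-space methods of this kind start from (a generic sketch, not the paper's incremental estimator):

```python
# Estimate a d-dimensional tangent basis at X[i] from its k nearest neighbors:
# the top right-singular vectors of the centered neighborhood span the tangent plane.
import numpy as np

def tangent_space(X, i, k=10, d=2):
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[1:k + 1]]       # k nearest neighbors, excluding self
    centered = nbrs - nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:d]                              # rows form the tangent basis
```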
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Normalizing Flows Across Dimensions [10.21537170623373]
We introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions.
NIFs explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations.
Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
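A stripped-down view of the injective step (illustrative; NIF's real architecture is more elaborate): pad the low-dimensional latent with zeros, apply an invertible map into data space, and add noise off the manifold.

```python
# Injective map from a low-dimensional latent to data space: zero-pad, apply an
# invertible linear map (stand-in for a full flow), then add off-manifold noise.
import torch
import torch.nn as nn

class InjectiveStep(nn.Module):
    def __init__(self, z_dim=2, x_dim=16, sigma=0.05):
        super().__init__()
        self.x_dim, self.sigma = x_dim, sigma
        self.W = nn.Parameter(torch.eye(x_dim) + 0.01 * torch.randn(x_dim, x_dim))

    def forward(self, z):                                  # z: (B, z_dim)
        pad = z.new_zeros(z.shape[0], self.x_dim - z.shape[1])
        x = torch.cat([z, pad], dim=-1) @ self.W.T         # point on the manifold
        return x + self.sigma * torch.randn_like(x)        # noisy observation
```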
arXiv Detail & Related papers (2020-06-23T14:47:18Z)
- Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised disentangling is possible to that of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.