Variational Autoencoder with Learned Latent Structure
- URL: http://arxiv.org/abs/2006.10597v2
- Date: Tue, 2 Mar 2021 16:53:18 GMT
- Title: Variational Autoencoder with Learned Latent Structure
- Authors: Marissa C. Connor, Gregory H. Canal, Christopher J. Rozell
- Abstract summary: We introduce the Variational Autoencoder with Learned Latent Structure (VAELLS).
VAELLS incorporates a learnable manifold model into the latent space of a VAE.
We validate our model on examples with known latent structure and also demonstrate its capabilities on a real-world dataset.
- Score: 4.41370484305827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The manifold hypothesis states that high-dimensional data can be modeled as
lying on or near a low-dimensional, nonlinear manifold. Variational
Autoencoders (VAEs) approximate this manifold by learning mappings from
low-dimensional latent vectors to high-dimensional data while encouraging a
global structure in the latent space through the use of a specified prior
distribution. When this prior does not match the structure of the true data
manifold, it can lead to a less accurate model of the data. To resolve this
mismatch, we introduce the Variational Autoencoder with Learned Latent
Structure (VAELLS) which incorporates a learnable manifold model into the
latent space of a VAE. This enables us to learn the nonlinear manifold
structure from the data and use that structure to define a prior in the latent
space. The integration of a latent manifold model not only ensures that our
prior is well-matched to the data, but also allows us to define generative
transformation paths in the latent space and describe class manifolds with
transformations stemming from examples of each class. We validate our model on
examples with known latent structure and also demonstrate its capabilities on a
real-world dataset.
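To make the abstract's central idea concrete, the following is a minimal sketch (assuming PyTorch) of a VAE whose latent prior is itself learnable rather than fixed to a standard Gaussian. It is not the authors' VAELLS implementation: an equally weighted Gaussian-mixture prior stands in for the paper's learned manifold model, and the class and parameter names (e.g. LearnablePriorVAE, n_components) are hypothetical.

```python
# Minimal sketch of a VAE with a learnable latent prior (PyTorch assumed).
# NOTE: the equally weighted Gaussian-mixture prior below is an illustrative
# stand-in, not the manifold prior used by VAELLS.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG2PI = math.log(2.0 * math.pi)

class LearnablePriorVAE(nn.Module):  # hypothetical name
    def __init__(self, x_dim=784, z_dim=2, h_dim=256, n_components=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Learnable prior parameters: mixture component means and log-variances.
        self.prior_mu = nn.Parameter(torch.randn(n_components, z_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_components, z_dim))

    def log_prior(self, z):
        # log p(z) under an equally weighted Gaussian mixture with learnable parameters.
        diff = z.unsqueeze(1) - self.prior_mu                        # (B, K, z_dim)
        log_comp = -0.5 * (diff ** 2 / self.prior_logvar.exp()
                           + self.prior_logvar + LOG2PI).sum(-1)     # (B, K)
        return torch.logsumexp(log_comp, dim=1) - math.log(self.prior_mu.shape[0])

    def forward(self, x):
        # x is assumed to be flattened and scaled to [0, 1] (e.g. binarized images).
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()         # reparameterization
        logits = self.dec(z)
        # Single-sample ELBO estimate: log p(x|z) + log p(z) - log q(z|x).
        recon = -F.binary_cross_entropy_with_logits(logits, x, reduction='none').sum(-1)
        log_q = -0.5 * ((z - mu) ** 2 / logvar.exp() + logvar + LOG2PI).sum(-1)
        return -(recon + self.log_prior(z) - log_q).mean()           # loss to minimize
```

Because log p(z) carries gradients into the prior parameters, the encoder, decoder, and prior adapt jointly. VAELLS replaces the stand-in mixture with a manifold model learned from the data, so the same coupling yields a prior that is matched to the latent structure described in the abstract.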
Related papers
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising of a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between dimensionality of AI model's feature space, non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our present analysis is on the influence of nonlinear feature transformations mapping original data into higher- and possibly infinite-dimensional spaces on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Flow Based Models For Manifold Data [11.344428134774475]
Flow-based generative models typically define a latent space with dimensionality identical to the observational space.
In many problems, the data do not populate the full ambient space in which they reside, but rather lie on a lower-dimensional manifold.
We propose to learn a manifold prior that affords benefits to both sample generation and representation quality.
arXiv Detail & Related papers (2021-09-29T06:48:01Z)
- Atlas Generative Models and Geodesic Interpolation [0.20305676256390928]
We define the general class of Atlas Generative Models (AGMs), models with a hybrid discrete-continuous latent space.
We exemplify this by generalizing an algorithm for graph-based geodesic interpolation to the setting of AGMs, and verify its performance experimentally.
arXiv Detail & Related papers (2021-01-30T16:35:25Z)
- Geometry-Aware Hamiltonian Variational Auto-Encoder [0.0]
Variational auto-encoders (VAEs) have proven to be a well-suited tool for performing dimensionality reduction by extracting latent variables lying in a potentially much lower-dimensional space than the data.
However, such generative models may perform poorly when trained on small data sets which are abundant in many real-life fields such as medicine.
We argue that such latent space modelling provides useful information about its underlying structure, leading to far more meaningful interpolations, more realistic data generation and more reliable clustering.
arXiv Detail & Related papers (2020-10-22T08:26:46Z)
- Generative Model without Prior Distribution Matching [26.91643368299913]
The Variational Autoencoder (VAE) and its variations are classic generative models that learn a low-dimensional latent representation constrained to satisfy some prior distribution.
We propose to let the prior match the embedding distribution rather than imposing the latent variables to fit the prior.
arXiv Detail & Related papers (2020-09-23T09:33:24Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)