Atlas Generative Models and Geodesic Interpolation
- URL: http://arxiv.org/abs/2102.00264v1
- Date: Sat, 30 Jan 2021 16:35:25 GMT
- Title: Atlas Generative Models and Geodesic Interpolation
- Authors: Jakob Stolberg-Larsen, Stefan Sommer
- Abstract summary: We define the general class of Atlas Generative Models (AGMs), models with hybrid discrete-continuous latent space.
We exemplify this by generalizing an algorithm for graph based geodesic interpolation to the setting of AGMs, and verify its performance experimentally.
- Score: 0.20305676256390928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative neural networks have a well-recognized ability to estimate
underlying manifold structure of high dimensional data. However, if a simply
connected latent space is used, it is not possible to faithfully represent a
manifold with non-trivial homotopy type. In this work we define the general
class of Atlas Generative Models (AGMs), models with hybrid discrete-continuous
latent space that estimate an atlas on the underlying data manifold together
with a partition of unity on the data space. We identify existing examples of
models from various popular generative paradigms that fit into this class. Due
to the atlas interpretation, ideas from non-linear latent space analysis and
statistics, e.g. geodesic interpolation, which has previously only been
investigated for models with simply connected latent spaces, may be extended to
the entire class of AGMs in a natural way. We exemplify this by generalizing an
algorithm for graph based geodesic interpolation to the setting of AGMs, and
verify its performance experimentally.
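To make the construction concrete, here is a minimal, hypothetical sketch of the two ingredients above: a hybrid discrete-continuous latent space (a chart index plus a continuous code, with one generator network per chart) and a graph-based geodesic interpolation whose edge weights are data-space distances, so that shortest paths can pass between charts. All function names and parameters are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical AGM sketch: latent points are (chart index c, code z) and each
# chart has its own generator; geodesics are approximated by shortest paths
# in a k-NN graph whose edge weights are distances between decoded samples.
import numpy as np
import networkx as nx

def decode(c, z, generators):
    """Map a hybrid latent point (chart index c, continuous code z) to data space."""
    return generators[c](z)

def geodesic_interpolate(a, b, generators, n_samples=500, k=8, seed=0):
    """Approximate a geodesic between latent points a = (c, z) and b = (c', z')."""
    rng = np.random.default_rng(seed)
    nodes = [a, b]
    for c in range(len(generators)):          # sample codes in every chart
        for _ in range(n_samples):
            nodes.append((c, rng.standard_normal(a[1].shape)))
    X = np.stack([decode(c, z, generators) for c, z in nodes])

    G = nx.Graph()
    for i in range(len(nodes)):               # connect each decoded sample to its
        d = np.linalg.norm(X - X[i], axis=1)  # k nearest neighbours in data space,
        for j in np.argsort(d)[1:k + 1]:      # regardless of originating chart
            G.add_edge(i, int(j), weight=float(d[j]))
    path = nx.shortest_path(G, source=0, target=1, weight="weight")
    return [nodes[i] for i in path]           # latent waypoints from a to b
```

Because edge weights are measured in data space, a shortest path can move through several charts, which is what lets the interpolation respect a manifold with non-trivial homotopy type.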
Related papers
- Understanding the Local Geometry of Generative Model Manifolds [14.191548577311904]
We study the relationship between the local geometry of the learned manifold and downstream generation.
We provide quantitative and qualitative evidence showing that for a given latent, the local descriptors are correlated with generation aesthetics, artifacts, uncertainty, and even memorization.
arXiv Detail & Related papers (2024-08-15T17:59:06Z)
- (Deep) Generative Geodesics [57.635187092922976]
We introduce a new Riemannian metric to assess the similarity between any two data points.
Our metric leads to the conceptual definition of generative distances and generative geodesics.
Their approximations are proven to converge to their true values under mild conditions.
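For background, one standard way such a metric is constructed (a pullback sketch stated under my own assumptions, not necessarily the paper's exact definition): a generator $g$ pulls the ambient Euclidean metric back to the latent space, and curve length under that metric defines geodesic distances.

```latex
% Pullback-metric sketch (background assumption, not the paper's notation):
% the generator g induces a Riemannian metric on the latent space, and a
% geodesic between z_0 and z_1 minimizes the curve length L(gamma).
\[
  G(z) = J_g(z)^{\top} J_g(z), \qquad
  L(\gamma) = \int_0^1 \sqrt{\dot{\gamma}(t)^{\top} G(\gamma(t))\, \dot{\gamma}(t)}\, \mathrm{d}t
\]
```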
arXiv Detail & Related papers (2024-07-15T21:14:02Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means the latent space provides only an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Unveiling the Latent Space Geometry of Push-Forward Generative Models [24.025975236316846]
Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs).
This work explores the latent space of such deep generative models.
A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions.
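To make the push-forward picture concrete, a minimal hypothetical sketch: a continuous generator maps a Gaussian latent into data space, so its image is connected, and some samples necessarily land between disconnected target modes. The generator g and its constants below are illustrative only, not taken from the paper.

```python
# Push-forward model sketch: x = g(z) with z Gaussian and g continuous, so the
# model's support g(R) is connected even when the target has two clusters.
import numpy as np

def g(z):
    # hypothetical continuous generator aimed at clusters near -3 and +3;
    # tanh squashes the latent but cannot disconnect the image
    return 3.0 * np.tanh(4.0 * z)

z = np.random.default_rng(0).standard_normal(10_000)
x = g(z)
# samples falling in the gap between the clusters lie outside the target support
print("fraction in the gap (|x| < 2):", np.mean(np.abs(x) < 2.0))
```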
arXiv Detail & Related papers (2022-07-21T15:29:35Z)
- Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) demonstrate remarkable empirical performance.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models for applications in robotics, geoscience or protein modeling.
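For background on the Euclidean setting the paper generalizes: standard score-based generation noises data with a forward SDE and samples by reversing it with the learned score. This is the standard VP-SDE formulation, stated here as context rather than as the paper's contribution.

```latex
% Standard Euclidean SGM pair (context only): forward noising SDE and its
% time reversal, driven by the learned score grad log p_t.
\[
  \mathrm{d}X_t = -\tfrac{1}{2} X_t\, \mathrm{d}t + \mathrm{d}W_t, \qquad
  \mathrm{d}Y_t = \Big[ \tfrac{1}{2} Y_t + \nabla \log p_{T-t}(Y_t) \Big] \mathrm{d}t + \mathrm{d}B_t
\]
```

Extending this to a Riemannian manifold means replacing the Euclidean Brownian motion and score with their intrinsic counterparts.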
arXiv Detail & Related papers (2022-02-06T11:57:39Z)
- Variational Autoencoder with Learned Latent Structure [4.41370484305827]
We introduce the Variational Autoencoder with Learned Latent Structure (VAELLS).
VAELLS incorporates a learnable manifold model into the latent space of a VAE.
We validate our model on examples with known latent structure and also demonstrate its capabilities on a real-world dataset.
arXiv Detail & Related papers (2020-06-18T14:59:06Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
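A schematic reading of that combination (illustrative notation, assumed rather than taken from the paper): independent sources are mixed linearly and then pushed through a bijective feature map, so both stages invert exactly.

```latex
% Bijective-feature-map ICA sketch (assumed notation): the flow f and a square
% mixing matrix A are both invertible, so the sources s are recovered exactly.
\[
  x = f(A s), \qquad p(s) = \prod_i p_i(s_i), \qquad s = A^{-1} f^{-1}(x)
\]
```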
arXiv Detail & Related papers (2020-02-18T17:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.