Unveiling the Latent Space Geometry of Push-Forward Generative Models
- URL: http://arxiv.org/abs/2207.10541v3
- Date: Mon, 15 May 2023 15:39:37 GMT
- Title: Unveiling the Latent Space Geometry of Push-Forward Generative Models
- Authors: Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard
- Abstract summary: Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs).
This work explores the latent space of such deep generative models.
A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions.
- Score: 24.025975236316846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many deep generative models are defined as a push-forward of a Gaussian
measure by a continuous generator, such as Generative Adversarial Networks
(GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space
of such deep generative models. A key issue with these models is their tendency
to output samples outside of the support of the target distribution when
learning disconnected distributions. We investigate the relationship between
the performance of these models and the geometry of their latent space.
Building on recent developments in geometric measure theory, we prove a
sufficient condition for optimality in the case where the dimension of the
latent space is larger than the number of modes. Through experiments on GANs,
we demonstrate the validity of our theoretical results and gain new insights
into the latent space geometry of these models. Additionally, we propose a
truncation method that enforces a simplicial cluster structure in the latent
space and improves the performance of GANs.
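The abstract's central object is the push-forward of a Gaussian measure: draw z ~ N(0, I_d) and output x = G(z) for a continuous generator G. Below is a minimal PyTorch sketch of this sampling step together with one plausible form of the proposed simplicial truncation. The Generator architecture, the simplicial_truncation helper, its tau parameter, and the choice of scaled basis vectors as simplex vertices are illustrative assumptions, not the paper's reference implementation.
```python
# Minimal sketch of a push-forward generative model with a hypothetical
# simplicial latent truncation. All names and hyperparameters here are
# illustrative, not the paper's reference implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Continuous generator G: R^d -> R^D pushing N(0, I_d) forward."""
    def __init__(self, latent_dim: int = 8, data_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, data_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def simplicial_truncation(z: torch.Tensor, centers: torch.Tensor,
                          tau: float = 0.7) -> torch.Tensor:
    """Pull each latent sample toward its nearest simplex vertex.

    `centers` is a (K, d) tensor of cluster centers; tau in [0, 1]
    interpolates between no truncation (0) and collapsing onto the
    vertices (1). This is one plausible reading of the paper's
    truncation, not its exact procedure.
    """
    dists = torch.cdist(z, centers)          # (n, K) pairwise distances
    nearest = centers[dists.argmin(dim=1)]   # nearest vertex per sample
    return (1 - tau) * z + tau * nearest     # convex pull toward vertex

latent_dim, n_modes = 8, 3
G = Generator(latent_dim)
# Simplex vertices: the first n_modes scaled canonical basis vectors of
# R^d, which exist precisely because latent_dim >= n_modes.
centers = 2.0 * torch.eye(latent_dim)[:n_modes]

z = torch.randn(512, latent_dim)                # Gaussian base measure
x_plain = G(z)                                  # standard push-forward samples
x_trunc = G(simplicial_truncation(z, centers))  # truncated latents
```
With tau = 0 the truncation is the identity and the plain push-forward sampler is recovered; as tau approaches 1 the latents collapse onto the simplex vertices, tying each generated sample to a single cluster.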
Related papers
- Geometric Trajectory Diffusion Models [58.853975433383326]
Generative models have shown great promise in generating 3D geometric systems.
Existing approaches operate only on static structures, neglecting that physical systems are inherently dynamic.
We propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories.
arXiv Detail & Related papers (2024-10-16T20:36:41Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method scales to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that, under these conditions, the generative functional model admits the same symmetries.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Geometric Latent Diffusion Models for 3D Molecule Generation [172.15028281732737]
Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries.
We propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM).
arXiv Detail & Related papers (2023-05-02T01:07:22Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) have demonstrated remarkable empirical performance; this work extends them to Riemannian manifolds.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models for applications in robotics, geoscience or protein modeling.
arXiv Detail & Related papers (2022-02-06T11:57:39Z) - Latent Space Refinement for Deep Generative Models [0.4297070083645048]
We show how latent space refinement via iterated generative modeling can circumvent topological obstructions and improve precision.
We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of examples, focusing on combinations of Normalizing Flows and Generative Adversarial Networks.
arXiv Detail & Related papers (2021-06-01T21:01:39Z) - Data Assimilation Predictive GAN (DA-PredGAN): applied to determine the spread of COVID-19 [0.0]
We propose the novel use of a generative adversarial network (GAN) to make predictions in time (PredGAN) and to assimilate measurements (DA-PredGAN).
GANs have received much attention recently after achieving excellent results in generating realistic-looking images.
arXiv Detail & Related papers (2021-05-17T10:56:53Z) - Max-Affine Spline Insights into Deep Generative Networks [8.579613053834342]
We connect a large class of Generative Deep Networks (GDNs) with spline operators in order to derive their properties, limitations, and new opportunities.
By characterizing the latent space partition, dimension and angularity of the generated manifold, we relate the manifold dimension and approximation error to the sample size.
We derive the output probability density mapped onto the generated manifold in terms of the latent space density, which enables the computation of key statistics such as its Shannon entropy; a standard form of this density is recalled after this list.
arXiv Detail & Related papers (2020-02-26T00:20:02Z)
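The last entry above expresses the output density on the generated manifold in terms of the latent density. A standard change-of-variables identity consistent with that summary, for an injective differentiable generator G with Jacobian J_G(z) (the cited paper's exact notation and derivation may differ), is:
```latex
% Density of the push-forward measure on the generated manifold,
% for an injective differentiable generator G with Jacobian J_G(z);
% notation is generic, not copied from the cited paper.
p_G\bigl(G(z)\bigr) \;=\; p_z(z)\,\det\!\bigl(J_G(z)^{\top} J_G(z)\bigr)^{-1/2}
```
Statistics such as the Shannon entropy then follow by taking the expectation of $-\log p_G$ under the latent density.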