Latent Space Refinement for Deep Generative Models
- URL: http://arxiv.org/abs/2106.00792v1
- Date: Tue, 1 Jun 2021 21:01:39 GMT
- Title: Latent Space Refinement for Deep Generative Models
- Authors: Ramon Winterhalder, Marco Bellagente, Benjamin Nachman
- Abstract summary: We show how latent space refinement via iterated generative modeling can circumvent topological obstructions and improve precision.
We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of examples, focusing on the combinations of Normalizing Flows and Generative Adversarial Networks.
- Score: 0.4297070083645048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models are becoming widely used across science and industry
for a variety of purposes. A common challenge is achieving a precise implicit
or explicit representation of the data probability density. Recent proposals
have suggested using classifier weights to refine the learned density of deep
generative models. We extend this idea to all types of generative models and
show how latent space refinement via iterated generative modeling can
circumvent topological obstructions and improve precision. This methodology
also applies to cases where the target model is non-differentiable and has many
internal latent dimensions which must be marginalized over before refinement.
We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of
examples, focusing on the combinations of Normalizing Flows and Generative
Adversarial Networks.
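To make the refinement step concrete, the following is a minimal one-dimensional sketch of classifier-based latent space refinement. The toy generator, the use of scikit-learn's logistic regression as the reweighting classifier, and the weighted resampling of latents are illustrative stand-ins, not the paper's setup, which trains a second generative model (e.g. a normalizing flow) on the reweighted latent space.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy "pre-trained" generator: maps a latent z ~ N(0, 1) into data space.
    # (Hypothetical stand-in for a GAN generator; any black-box map works.)
    def generator(z):
        return np.tanh(1.5 * z) + 0.1 * z

    # Target data that the generator only approximates.
    x_data = rng.normal(loc=0.4, scale=0.6, size=20000)

    # Step 1: draw latents and generate samples.
    z = rng.normal(size=20000)
    x_gen = generator(z)

    # Step 2: train a classifier to separate data from generated samples;
    # D / (1 - D) estimates the density ratio p_data(x) / p_gen(x).
    X = np.concatenate([x_data, x_gen]).reshape(-1, 1)
    y = np.concatenate([np.ones_like(x_data), np.zeros_like(x_gen)])
    D = LogisticRegression().fit(X, y).predict_proba(x_gen.reshape(-1, 1))[:, 1]
    w = D / np.clip(1.0 - D, 1e-6, None)  # per-sample refinement weights

    # Step 3: pull the weights back to the latent space and refine it.
    # Weighted resampling is a crude stand-in for the iterated generative
    # modeling step (e.g. a normalizing flow trained on the weighted latents).
    z_refined = rng.choice(z, size=20000, replace=True, p=w / w.sum())
    x_refined = generator(z_refined)

    print("mean before refinement:", x_gen.mean())
    print("mean after refinement: ", x_refined.mean())
    print("target mean:           ", x_data.mean())

After refinement, samples drawn through the refined latent distribution follow the classifier-corrected density without requiring per-event weights at generation time.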
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Learning Generative Models for Lumped Rainfall-Runoff Modeling [3.69758875412828]
This study presents a novel generative modeling approach to rainfall-runoff modeling, focusing on the synthesis of realistic daily catchment runoff time series.
Unlike traditional process-based lumped hydrologic models, our approach uses a small number of latent variables to characterize runoff generation processes.
We trained the generative models using neural networks on data from over 3,000 global catchments and achieved prediction accuracies comparable to current deep learning models.
arXiv Detail & Related papers (2023-09-18T16:07:41Z) - Precision-Recall Divergence Optimization for Generative Modeling with
GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey reviews advanced techniques for improving diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z) - Unveiling the Latent Space Geometry of Push-Forward Generative Models [24.025975236316846]
Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs).
This work explores the latent space of such deep generative models.
A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions.
arXiv Detail & Related papers (2022-07-21T15:29:35Z) - Diagnosing and Fixing Manifold Overfitting in Deep Generative Models [11.82509693248749]
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities.
We show that when observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space, likelihood-based training suffers from manifold overfitting.
We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation.
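A minimal sketch of such a two-step procedure on toy manifold data, with PCA and a Gaussian mixture as stand-ins for the paper's more flexible dimensionality-reduction and likelihood models:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy data on a noisy 1-d manifold embedded in 10-d ambient space.
    t = rng.uniform(-1.0, 1.0, size=5000)
    direction = rng.normal(size=(1, 10))
    x = t[:, None] * direction + 0.01 * rng.normal(size=(5000, 10))

    # Step 1: dimensionality reduction onto low-dimensional coordinates
    # (PCA here; the paper uses more general autoencoder-style maps).
    reducer = PCA(n_components=1).fit(x)
    u = reducer.transform(x)

    # Step 2: maximum-likelihood density estimation in the low-dim space
    # (a Gaussian mixture here; any explicit density model, e.g. a flow, works).
    density = GaussianMixture(n_components=5, random_state=0).fit(u)

    # Sampling: draw low-dimensional codes, then map back to ambient space.
    u_new, _ = density.sample(10)
    x_new = reducer.inverse_transform(u_new)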
arXiv Detail & Related papers (2022-04-14T18:00:03Z) - Deep Variational Models for Collaborative Filtering-based Recommender
Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z) - Characterizing the Latent Space of Molecular Deep Generative Models with
Persistent Homology Metrics [21.95240820041655]
Variational Autoencoders (VAEs) are generative models in which encoder-decoder network pairs are trained to reconstruct training data distributions.
We propose a method for measuring how well the latent space of deep generative models is able to encode structural and chemical features.
arXiv Detail & Related papers (2020-10-18T13:33:02Z) - Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
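As an illustration of the closed-form factorization idea, here is a minimal numerical sketch in which a random matrix stands in for the first-layer weights of a pre-trained generator (the paper applies the decomposition to actual trained GANs). The candidate semantic directions are the top eigenvectors of A^T A, i.e. the unit latent directions along which the layer's output changes the most.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the weight matrix of the first transformation
    # a pre-trained GAN generator applies to its 512-d latent code.
    A = rng.normal(size=(1024, 512))

    # Closed-form factorization: directions maximizing ||A n|| for unit n
    # are the eigenvectors of A^T A with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1]
    directions = eigvecs[:, order[:5]].T  # top-5 candidate semantic directions

    # Editing: move a latent code along one of the discovered directions.
    z = rng.normal(size=512)
    z_edited = z + 3.0 * directions[0]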