Max-Affine Spline Insights into Deep Generative Networks
- URL: http://arxiv.org/abs/2002.11912v1
- Date: Wed, 26 Feb 2020 00:20:02 GMT
- Title: Max-Affine Spline Insights into Deep Generative Networks
- Authors: Randall Balestriero, Sebastien Paris, Richard Baraniuk
- Abstract summary: We connect a large class of Deep Generative Networks (DGNs) with spline operators in order to derive their properties, limitations, and new opportunities.
By characterizing the latent space partition, dimension and angularity of the generated manifold, we relate the manifold dimension and approximation error to the sample size.
We derive the output probability density mapped onto the generated manifold in terms of the latent space density, which enables the computation of key statistics such as its Shannon entropy.
- Score: 8.579613053834342
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We connect a large class of Deep Generative Networks (DGNs) with spline
operators in order to derive their properties, limitations, and new
opportunities. By characterizing the latent space partition, dimension and
angularity of the generated manifold, we relate the manifold dimension and
approximation error to the sample size. The manifold-per-region affine subspace
defines a local coordinate basis; we provide necessary and sufficient
conditions relating those basis vectors with disentanglement. We also derive
the output probability density mapped onto the generated manifold in terms of
the latent space density, which enables the computation of key statistics such
as its Shannon entropy. This finding also enables the computation of the DGN
likelihood, which provides a new mechanism for model comparison as well as
a quality measure for (generated) samples under the learned
distribution. We demonstrate how low entropy and/or multimodal distributions
are not naturally modeled by DGNs and are a cause of training instabilities.
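To make the density statement above concrete, here is a minimal reconstruction of the change-of-variables identity for a continuous piecewise-affine DGN. The per-region notation is ours, chosen for illustration rather than copied from the paper, and we assume each slope matrix has full column rank so the generator is injective on each region:

```latex
% On each region \omega of the latent-space partition \Omega, a
% max-affine spline DGN acts affinely: G(z) = A_\omega z + b_\omega.
% Under the full-column-rank assumption on A_\omega, the latent
% density p_z maps onto the generated manifold as
\[
  p_G(x) = \sum_{\omega \in \Omega}
    p_z\!\left(A_\omega^{+}(x - b_\omega)\right)
    \left(\det\!\left(A_\omega^{\top} A_\omega\right)\right)^{-1/2}
    \mathbf{1}\{x \in G(\omega)\},
\]
% from which statistics such as the Shannon entropy
% H(p_G) = -\mathbb{E}_{p_G}[\log p_G(x)] and per-sample likelihoods
% follow directly.
```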
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results yield the convergence rate of a sieve maximum likelihood estimator for the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Adaptive Learning of the Latent Space of Wasserstein Generative Adversarial Networks [7.958528596692594]
We propose a novel framework called the latent Wasserstein GAN (LWGAN).
It fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned.
We show that LWGAN is able to identify the correct intrinsic dimension under several scenarios.
arXiv Detail & Related papers (2024-09-27T01:25:22Z)
- Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian probability distributions [5.801621787540268]
We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure.
We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution.
An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process.
arXiv Detail & Related papers (2024-02-12T22:02:23Z)
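To make "empirical score matching" concrete, the sketch below runs denoising score matching on toy 1-D data. The polynomial feature model is a hypothetical stand-in for the deep score network whose approximation rate the paper analyzes; all names and constants are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data standing in for the sub-Gaussian target distribution.
x = rng.normal(loc=2.0, scale=0.7, size=(1024, 1))
sigma = 0.5  # noise level of the perturbation kernel (illustrative)

def features(v):
    # Polynomial features: a stand-in for a deep score network.
    return np.concatenate([np.ones_like(v), v, v ** 2], axis=1)

# Denoising score matching: perturb the data with Gaussian noise and
# regress the model score at x_tilde onto -(x_tilde - x) / sigma^2,
# the score of the Gaussian perturbation kernel.
noise = sigma * rng.normal(size=x.shape)
x_tilde = x + noise
target = -noise / sigma ** 2

# With a model linear in its features, the DSM objective is ordinary
# least squares, so this toy fit has a closed form.
theta, *_ = np.linalg.lstsq(features(x_tilde), target, rcond=None)

# Sanity check: the fitted score should vanish near the data mode (2.0).
print(features(np.array([[2.0]])) @ theta)
```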
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Optimal Scaling for Locally Balanced Proposals in Discrete Spaces [65.14092237705476]
We show that the efficiency of Metropolis-Hastings (M-H) algorithms in discrete spaces can be characterized by an acceptance rate that is independent of the target distribution.
Knowledge of the optimal acceptance rate allows one to automatically tune the neighborhood size of a proposal distribution in a discrete space, directly analogous to step-size control in continuous spaces.
arXiv Detail & Related papers (2022-09-16T22:09:53Z)
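The tuning idea lends itself to a short sketch: track the realized acceptance rate and nudge the proposal's neighborhood radius toward a target rate, the discrete analogue of step-size control. The target value below is the classical MALA optimum, used purely for illustration; the paper derives the appropriate rate for locally balanced proposals. The toy target and schedule are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: an unnormalized discretized Gaussian on {0, ..., 99}.
log_probs = -0.5 * ((np.arange(100) - 50) / 8.0) ** 2

TARGET_RATE = 0.574  # MALA optimum, for illustration only

def mh_adaptive(n_steps=20_000, radius=1):
    x, accepts = 50, 0
    for t in range(1, n_steps + 1):
        # Symmetric proposal: uniform over a +/- radius neighborhood.
        step = int(rng.integers(1, radius + 1)) * int(rng.choice([-1, 1]))
        y = (x + step) % 100
        if np.log(rng.random()) < log_probs[y] - log_probs[x]:
            x, accepts = y, accepts + 1
        # Periodically nudge the neighborhood size toward the target
        # acceptance rate -- the discrete analogue of step-size control.
        if t % 500 == 0:
            rate = accepts / t
            radius = radius + 1 if rate > TARGET_RATE else max(1, radius - 1)
    return x, radius, accepts / n_steps

print(mh_adaptive())  # (final state, tuned radius, realized acceptance rate)
```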
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds, for which we exploit the explicit nature of NFs: surface normals extracted from the gradient of the log-likelihood, and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
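The recovery objective can be sketched with a closed-form stand-in for a flow's log-likelihood: a density concentrated around the unit circle plays the role of the learned manifold. Gradient ascent on log p pulls a perturbed sample back to the most likely nearby point, and the gradient also supplies the surface normal direction mentioned above. Everything here is an illustrative assumption, not the authors' model:

```python
import numpy as np

EPS = 0.05  # concentration of the density around the "manifold"

def log_likelihood(x):
    # Stand-in for a flow's log p: peaks on the unit circle in R^2.
    r = np.linalg.norm(x)
    return -((r - 1.0) ** 2) / (2 * EPS ** 2)

def grad_log_likelihood(x):
    r = np.linalg.norm(x)
    return -(r - 1.0) / EPS ** 2 * (x / r)

def project_to_manifold(x, lr=1e-3, n_steps=500):
    # Gradient ascent on log p: move a perturbed sample toward the
    # most likely point on the manifold, per the objective above.
    for _ in range(n_steps):
        x = x + lr * grad_log_likelihood(x)
    return x

x_noisy = np.array([1.3, 0.4])          # perturbed, off-manifold sample
x_hat = project_to_manifold(x_noisy)
normal = grad_log_likelihood(x_noisy)   # normal direction from grad log p
normal /= np.linalg.norm(normal)
print(np.linalg.norm(x_hat))            # ~1.0: back on the circle
```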
- Unveiling the Latent Space Geometry of Push-Forward Generative Models [24.025975236316846]
Many deep generative models, such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs), are defined as the push-forward of a Gaussian measure by a continuous generator.
This work explores the latent space of such deep generative models.
A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions.
arXiv Detail & Related papers (2022-07-21T15:29:35Z)
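The connectedness obstruction behind this support mismatch fits in a line; the following gloss is ours, not text from the paper:

```latex
% G continuous, \gamma Gaussian on R^d with supp(\gamma) = R^d:
\[
  \operatorname{supp}(G_{\#}\gamma)
    = \overline{G\!\left(\operatorname{supp}\gamma\right)} ,
\]
% the closure of a continuous image of a connected set, hence itself
% connected. A disconnected target support can therefore never be
% matched exactly; the generator must place mass between the modes.
```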
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as their parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
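For readers new to the tool, the sketch below computes with empirical kernel mean embeddings mu_X = (1/n) sum_i k(x_i, .) via the kernel trick: the squared MMD, the RKHS distance between two embeddings, reduces to kernel-matrix averages. The Gaussian kernel used here is characteristic, the property the result above relies on; the sum-of-squares density optimization itself is beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma=0.5):
    # Gaussian (characteristic) kernel: k(x, y) = exp(-gamma ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    # Squared RKHS distance between the empirical mean embeddings of X
    # and Y (biased V-statistic estimate).
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() \
        - 2.0 * rbf(X, Y, gamma).mean()

X = rng.normal(0.0, 1.0, size=(200, 1))
Z = rng.normal(0.0, 1.0, size=(200, 1))  # same distribution as X
Y = rng.normal(0.5, 1.0, size=(200, 1))  # shifted distribution

print(mmd2(X, Z))  # near 0: embeddings of equal distributions coincide
print(mmd2(X, Y))  # larger: a characteristic kernel separates them
```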
- A likelihood approach to nonparametric estimation of a singular distribution using deep generative models [4.329951775163721]
We investigate a likelihood approach to nonparametric estimation of a singular distribution using deep generative models.
We prove that a novel and effective solution exists by perturbing the data with instance noise.
We also characterize the class of distributions that can be efficiently estimated via deep generative models.
arXiv Detail & Related papers (2021-05-09T23:13:58Z)
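A minimal illustration of the instance-noise idea: data supported on a line segment in R^2 is singular (it admits no Lebesgue density), but convolving it with a small Gaussian yields a proper density that likelihood-based training can target. The noise level is a hypothetical choice, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Singular data: supported on the segment {(t, 2t) : t in [0, 1]},
# a 1-D set with no density with respect to Lebesgue measure on R^2.
t = rng.uniform(size=(1000, 1))
data = np.concatenate([t, 2.0 * t], axis=1)

def add_instance_noise(batch, sigma=0.05):
    # Perturb each sample with isotropic Gaussian noise; the perturbed
    # distribution has a smooth density, so maximum likelihood is
    # well-posed (sigma = 0.05 is an illustrative choice).
    return batch + sigma * rng.normal(size=batch.shape)

smoothed = add_instance_noise(data)
# 'smoothed' fills a thin 2-D band around the segment; a deep
# generative model can now be fit to it by maximum likelihood.
print(smoothed.std(axis=0))
```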
This list is automatically generated from the titles and abstracts of the papers on this site.