Normalizing Flows Across Dimensions
- URL: http://arxiv.org/abs/2006.13070v1
- Date: Tue, 23 Jun 2020 14:47:18 GMT
- Title: Normalizing Flows Across Dimensions
- Authors: Edmond Cunningham, Renos Zabounidis, Abhinav Agrawal, Madalina
Fiterau, Daniel Sheldon
- Abstract summary: We introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions.
NIF explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations.
Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
- Score: 10.21537170623373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world data with underlying structure, such as pictures of faces, are
hypothesized to lie on a low-dimensional manifold. This manifold hypothesis has
motivated state-of-the-art generative algorithms that learn low-dimensional
data representations. Unfortunately, a popular generative model, normalizing
flows, cannot take advantage of this. Normalizing flows are based on successive
variable transformations that are, by design, incapable of learning
lower-dimensional representations. In this paper we introduce noisy injective
flows (NIF), a generalization of normalizing flows that can go across
dimensions. NIF explicitly map the latent space to a learnable manifold in a
high-dimensional data space using injective transformations. We further employ
an additive noise model to account for deviations from the manifold and
identify a stochastic inverse of the generative process. Empirically, we
demonstrate that a simple application of our method to existing flow
architectures can significantly improve sample quality and yield separable data
embeddings.
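To make the generative process concrete, below is a minimal NumPy sketch of one noisy injective step, assuming a fixed orthonormal matrix `A` as the injective map and an isotropic noise scale `sigma`; the names and the `generate`/`stochastic_inverse` helpers are illustrative assumptions, since the paper's actual injective transformations are learned flow layers rather than a fixed matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

d, D = 2, 5        # latent dimension < data dimension
sigma = 0.1        # additive-noise scale

# Injective map g: R^d -> R^D. A fixed orthonormal matrix A
# (A^T A = I_d) stands in for the learned injective flow layers.
A = np.linalg.qr(rng.normal(size=(D, d)))[0]

def generate(n):
    """Sample x = g(z) + eps: points on a d-dim manifold plus noise."""
    z = rng.normal(size=(n, d))                  # base distribution
    x = z @ A.T                                  # injective map onto the manifold
    return x + sigma * rng.normal(size=(n, D))   # deviations from the manifold

def stochastic_inverse(x):
    """Map a noisy observation back to latent space. For this
    linear-Gaussian stand-in, the posterior mean of z given x is
    (A^T A + sigma^2 I)^{-1} A^T x, which with orthonormal A
    simplifies to A^T x / (1 + sigma^2)."""
    return x @ A / (1.0 + sigma**2)

x = generate(4)
print(stochastic_inverse(x))
```

With a learned flow in place of `A`, the same structure yields a likelihood whose maximization fits both the manifold and the deviations from it, which is roughly the role the stochastic inverse plays in NIF.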
Related papers
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data, however, because such data typically lie on an implicit low-dimensional manifold.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
- Canonical normalizing flows for manifold learning [14.377143992248222]
We propose a canonical manifold learning flow method, where a novel objective enforces the transformation matrix to have few prominent and non-degenerate basis functions.
Canonical manifold flow yields a more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent data.
arXiv Detail & Related papers (2023-10-19T13:48:05Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds, for which we exploit the explicit nature of NFs: surface normals extracted from the gradient of the log-likelihood, and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
- Joint Manifold Learning and Density Estimation Using Normalizing Flows [4.939777212813711]
We introduce two approaches, namely per-pixel penalized log-likelihood and hierarchical training, to jointly learn the manifold and estimate the density.
We propose a single-step method for joint manifold learning and density estimation by disentangling the transformed space.
Results validate the superiority of the proposed methods in simultaneous manifold learning and density estimation.
arXiv Detail & Related papers (2022-06-07T13:35:14Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Principal Manifold Flows [6.628230604022489]
We characterize the geometric structure of normalizing flows and understand the relationship between latent variables and samples using contours.
We introduce a novel class of normalizing flows, called principal manifold flows (PF), whose contours are its principal manifold.
We show that PFs can perform density estimation on data that lie on a manifold with variable dimensionality, which is not possible with existing normalizing flows.
arXiv Detail & Related papers (2022-02-14T20:58:15Z)
- Funnels: Exact maximum likelihood with dimensionality reduction [6.201770337181472]
We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, known as the funnel.
We demonstrate its efficacy on a variety of datasets, and show it improves upon or matches the performance of existing flows while having a reduced latent space size.
arXiv Detail & Related papers (2021-12-15T12:20:25Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Rectangular Flows for Manifold Learning [38.63646804834534]
Normalizing flows are invertible neural networks with tractable change-of-volume terms.
Data of interest is typically assumed to live in some (often unknown) low-dimensional manifold embedded in high-dimensional ambient space.
For injective flows onto such a manifold, the change-of-volume term involves log det(J^T J). We propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model (see the sketch after this list).
arXiv Detail & Related papers (2021-06-02T18:30:39Z)
- SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows [78.77808270452974]
SurVAE Flows is a modular framework for composable transformations that encompasses VAEs and normalizing flows.
We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows.
arXiv Detail & Related papers (2020-07-06T13:13:22Z)
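Several of the entries above (NIF, ManiFlow, Rectangular Flows) rely on the change-of-variables formula for injective maps, in which the familiar log |det J| is replaced by (1/2) log det(J^T J) for a rectangular Jacobian J. Below is a minimal NumPy sketch, with a random full-column-rank matrix standing in for the Jacobian of a learned flow at one latent point; the matrix and the function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 2, 5

# Full-column-rank J stands in for the Jacobian of a learned
# injective flow g: R^d -> R^D, evaluated at one latent point z.
J = rng.normal(size=(D, d))

def injective_log_volume(J):
    """Change-of-volume term for an injective map:
    log p(x) = log p(z) - 0.5 * log det(J^T J)."""
    sign, logdet = np.linalg.slogdet(J.T @ J)
    assert sign > 0, "J must have full column rank (injectivity)"
    return 0.5 * logdet

print(injective_log_volume(J))  # subtract this from log p(z)
```

The naive route above forms J^T J explicitly and takes its determinant, which costs O(D d^2 + d^3); for deep flows, even materializing J requires d Jacobian-vector products, and making the parameter gradient of this quantity tractable is the problem the Rectangular Flows entry addresses.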
This list is automatically generated from the titles and abstracts of the papers on this site.