Rectangular Flows for Manifold Learning
- URL: http://arxiv.org/abs/2106.01413v1
- Date: Wed, 2 Jun 2021 18:30:39 GMT
- Title: Rectangular Flows for Manifold Learning
- Authors: Anthony L. Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, John P.
Cunningham
- Abstract summary: Normalizing flows are invertible neural networks with tractable change-of-volume terms.
Data of interest is typically assumed to live in some (often unknown) low-dimensional manifold embedded in high-dimensional ambient space.
We propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model.
- Score: 38.63646804834534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Normalizing flows are invertible neural networks with tractable
change-of-volume terms, which allows optimization of their parameters to be
efficiently performed via maximum likelihood. However, data of interest is
typically assumed to live in some (often unknown) low-dimensional manifold
embedded in high-dimensional ambient space. The result is a modelling mismatch
since -- by construction -- the invertibility requirement implies
high-dimensional support of the learned distribution. Injective flows, mapping
from low- to high-dimensional space, aim to fix this discrepancy by learning
distributions on manifolds, but the resulting volume-change term becomes more
challenging to evaluate. Current approaches either avoid computing this term
entirely using various heuristics, or assume the manifold is known beforehand
and therefore are not widely applicable. Instead, we propose two methods to
tractably calculate the gradient of this term with respect to the parameters of
the model, relying on careful use of automatic differentiation and techniques
from numerical linear algebra. Both approaches perform end-to-end nonlinear
manifold learning and density estimation for data projected onto this manifold.
We study the trade-offs between our proposed methods, empirically verify that
we outperform approaches ignoring the volume-change term by more accurately
learning manifolds and the corresponding distributions on them, and show
promising results on out-of-distribution detection.
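
To make the volume-change term concrete, here is a minimal sketch (not the authors' implementation) of an injective-flow log-likelihood and one way to differentiate its volume term. The toy decoder `g`, the dimensions `d` and `D`, and the use of `torch.linalg.solve` in place of the paper's conjugate-gradient solves are all illustrative assumptions.

```python
# A minimal sketch, NOT the paper's code: an injective map g: R^d -> R^D with
#   log p_X(g(z)) = log p_Z(z) - 0.5 * log det(J^T J),   J = dg/dz,
# where 0.5 * logdet(J^T J) is the volume-change term discussed above.
import torch

d, D = 2, 5  # latent (manifold) and ambient dimensions; arbitrary toy values

# Toy injective network standing in for a trained injective flow (assumption).
g = torch.nn.Sequential(
    torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, D)
)

def log_likelihood(z: torch.Tensor) -> torch.Tensor:
    """Exact evaluation of the injective-flow density; feasible for small d."""
    log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum()
    # Rectangular D x d Jacobian via autodiff; create_graph=True keeps it
    # differentiable with respect to the parameters of g.
    J = torch.autograd.functional.jacobian(g, z, create_graph=True)
    return log_pz - 0.5 * torch.logdet(J.T @ J)

def volume_grad_surrogate(z: torch.Tensor, n_probes: int = 4) -> torch.Tensor:
    """Stochastic surrogate whose *gradient* matches that of
    0.5 * logdet(J^T J) in expectation (its value does not).

    Uses d/dtheta 0.5 logdet A = 0.5 tr(A^{-1} dA/dtheta), estimated with
    Hutchinson probes v and linear solves A^{-1} v. The paper performs these
    solves with conjugate gradients so only Jacobian-vector products are
    needed; torch.linalg.solve is used here purely for brevity.
    """
    J = torch.autograd.functional.jacobian(g, z, create_graph=True)
    A = J.T @ J
    acc = z.new_zeros(())
    for _ in range(n_probes):
        v = torch.randn(d)
        a = torch.linalg.solve(A.detach(), v)  # A^{-1} v, held out of autodiff
        acc = acc + 0.5 * (a @ (A @ v))
    return acc / n_probes

z = torch.randn(d)
(-log_likelihood(z)).backward()  # exact gradients for maximum likelihood
```

The two routes above mirror the trade-off the abstract alludes to: exact differentiation of the log-determinant scales poorly with the latent dimension, while the probe-based surrogate trades variance for a much cheaper per-iteration cost.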
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Canonical normalizing flows for manifold learning [14.377143992248222]
We propose a canonical manifold learning flow method, where a novel objective constrains the transformation matrix to a few prominent, non-degenerate basis functions.
Canonical manifold flow thus makes more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent the data.
arXiv Detail & Related papers (2023-10-19T13:48:05Z)
- Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes [57.396578974401734]
We introduce a principled framework for building a generative diffusion process on general manifolds.
Instead of following the denoising approach of previous diffusion models, we construct a diffusion process using a mixture of bridge processes.
We develop a geometric understanding of the mixture process, deriving the drift as a weighted mean of tangent directions to the data points.
arXiv Detail & Related papers (2023-10-11T06:04:40Z)
- Out-of-distribution detection using normalizing flows on the data manifold [3.725042082196983]
This study investigates the effect of manifold learning using normalizing flows on out-of-distribution detection.
We show that manifold learning improves the out-of-distribution detection ability of a class of likelihood-based models known as normalizing flows.
arXiv Detail & Related papers (2023-08-26T07:35:16Z)
- Joint Manifold Learning and Density Estimation Using Normalizing Flows [4.939777212813711]
We introduce two approaches, namely per-pixel penalized log-likelihood and hierarchical training, for joint manifold learning and density estimation.
We propose a single-step method for joint manifold learning and density estimation by disentangling the transformed space.
Results validate the superiority of the proposed methods in simultaneous manifold learning and density estimation.
arXiv Detail & Related papers (2022-06-07T13:35:14Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
- Diagnosing and Fixing Manifold Overfitting in Deep Generative Models [11.82509693248749]
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities.
We show that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space.
We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation.
arXiv Detail & Related papers (2022-04-14T18:00:03Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.