Canonical normalizing flows for manifold learning
- URL: http://arxiv.org/abs/2310.12743v2
- Date: Tue, 31 Oct 2023 16:24:18 GMT
- Title: Canonical normalizing flows for manifold learning
- Authors: Kyriakos Flouris and Ender Konukoglu
- Abstract summary: We propose a canonical manifold learning flow method, where a novel objective enforces the transformation matrix to have few prominent and non-degenerate basis functions.
Canonical manifold flow yields a more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent data.
- Score: 14.377143992248222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manifold learning flows are a class of generative modelling techniques that
assume a low-dimensional manifold description of the data. The embedding of
such a manifold into the high-dimensional space of the data is achieved via
learnable invertible transformations. Therefore, once the manifold is properly
aligned via a reconstruction loss, the probability density is tractable on the
manifold and maximum likelihood can be used to optimize the network parameters.
Naturally, the lower-dimensional representation of the data requires an
injective mapping. Recent approaches were able to enforce that the density
aligns with the modelled manifold, while efficiently calculating the density
volume-change term when embedding to the higher-dimensional space. However,
unless the injective mapping is analytically predefined, the learned manifold
is not necessarily an efficient representation of the data. Namely, the latent
dimensions of such models frequently learn an entangled intrinsic basis, with
degenerate information being stored in each dimension. Alternatively, learning
a locally orthogonal and/or sparse basis, here coined a canonical intrinsic
basis, can yield a more compact latent space representation. Toward this end,
we propose a canonical manifold learning flow method, in which a novel
optimization objective enforces the transformation matrix to have few prominent
and non-degenerate basis functions. We demonstrate that by minimizing the
$\ell_1$-norm of the off-diagonal manifold metric elements, we can achieve such
a basis, which is simultaneously sparse and/or orthogonal.
Canonical manifold flow yields a more efficient use of the latent space,
automatically generating fewer prominent and distinct dimensions to represent
data, and a better approximation of target distributions than other manifold
flow methods in most experiments we conducted, resulting in lower FID scores.
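To make the objective concrete, here is a minimal sketch of how such a penalty could be implemented. This is an illustration under our own assumptions (PyTorch, a single latent vector, and illustrative names such as `decoder` and `beta`), not the authors' code:

```python
# Minimal sketch (not the authors' implementation) of the canonical
# manifold learning penalty: for an injective decoder g: R^d -> R^D,
# form the pullback metric G = J^T J from the Jacobian J of g and
# penalize the l1-norm of G's off-diagonal elements.
import torch

def off_diagonal_metric_penalty(decoder, z):
    """l1-norm of the off-diagonal manifold metric elements.

    decoder: injective map from a latent vector z in R^d to R^D.
    z: a single latent vector of shape (d,).
    """
    # J has shape (D, d): rows index data dims, columns latent dims.
    J = torch.autograd.functional.jacobian(decoder, z, create_graph=True)
    G = J.T @ J                               # (d, d) pullback metric
    off_diag = G - torch.diag(torch.diag(G))  # zero out the diagonal
    return off_diag.abs().sum()

# Hypothetical usage: add the penalty, weighted by some beta, to the
# usual manifold-flow loss (reconstruction + negative log-likelihood).
# loss = recon + nll + beta * off_diagonal_metric_penalty(decoder, z)
```

Driving the off-diagonal entries of $G = J^\top J$ toward zero encourages the Jacobian columns, i.e., the learned intrinsic basis vectors, to be mutually orthogonal and/or sparse, which is the canonical intrinsic basis described in the abstract.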
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data, which in practice often lie on an implicit low-dimensional manifold.
In this paper, we propose a technique capable of inferring the implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
- Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z)
- The Manifold Scattering Transform for High-Dimensional Point Cloud Data [16.500568323161563]
We present practical schemes for applying the manifold scattering transform to datasets arising in naturalistic systems.
We show that our methods are effective for signal classification and manifold classification tasks.
arXiv Detail & Related papers (2022-06-21T02:15:00Z)
- Joint Manifold Learning and Density Estimation Using Normalizing Flows [4.939777212813711]
We introduce two approaches, namely per-pixel penalized log-likelihood and hierarchical training, to address this joint task.
We propose a single-step method for joint manifold learning and density estimation by disentangling the transformed space.
Results validate the superiority of the proposed methods in simultaneous manifold learning and density estimation.
arXiv Detail & Related papers (2022-06-07T13:35:14Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Rectangular Flows for Manifold Learning [38.63646804834534]
Normalizing flows are invertible neural networks with tractable change-of-volume terms.
Data of interest is typically assumed to live in some (often unknown) low-dimensional manifold embedded in high-dimensional ambient space.
We propose two methods to tractably compute the gradient of this term with respect to the parameters of the model.
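For context, the term in question is the change-of-volume factor in the injective change-of-variables formula, restated below as a standard identity rather than quoted from the paper:

```latex
% Density of x = g(z) under an injective flow g : R^d -> R^D (d < D)
% with rectangular Jacobian J_g(z) of shape D x d:
p_X(x) = p_Z(z)\,\det\!\big(J_g(z)^{\top} J_g(z)\big)^{-1/2},
\qquad z = g^{-1}(x) \ \text{for } x \text{ on the manifold } g(\mathbb{R}^d).
```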
arXiv Detail & Related papers (2021-06-02T18:30:39Z)
- A Local Similarity-Preserving Framework for Nonlinear Dimensionality Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general-purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of the data matrix and define the context of each data point.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis testing.
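As a rough, hypothetical illustration of the graph-construction step mentioned above (not the Vec2vec implementation; the toy data and `n_neighbors=10` are our own choices):

```python
# Sketch: build a neighborhood similarity graph for a data matrix X.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(100, 50)  # toy data: 100 points in 50 dimensions
# Sparse adjacency matrix; edges carry distances to 10 nearest neighbors.
A = kneighbors_graph(X, n_neighbors=10, mode="distance", include_self=False)
```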
arXiv Detail & Related papers (2021-03-10T23:10:47Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure into the coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Normalizing Flows Across Dimensions [10.21537170623373]
We introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions.
NIFs explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations.
Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
arXiv Detail & Related papers (2020-06-23T14:47:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.