ManiFlow: Implicitly Representing Manifolds with Normalizing Flows
- URL: http://arxiv.org/abs/2208.08932v1
- Date: Thu, 18 Aug 2022 16:07:59 GMT
- Title: ManiFlow: Implicitly Representing Manifolds with Normalizing Flows
- Authors: Janis Postels, Martin Danelljan, Luc Van Gool, Federico Tombari
- Abstract summary: Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds, for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself, to refine generated point sets via Poisson surface reconstruction.
- Score: 145.9820993054072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Normalizing Flows (NFs) are flexible explicit generative models that have
been shown to accurately model complex real-world data distributions. However,
their invertibility constraint imposes limitations on data distributions that
reside on lower dimensional manifolds embedded in higher dimensional space.
Practically, this shortcoming is often bypassed by adding noise to the data
which impacts the quality of the generated samples. In contrast to prior work,
we approach this problem by generating samples from the original data
distribution given full knowledge about the perturbed distribution and the
noise model. To this end, we establish that NFs trained on perturbed data
implicitly represent the manifold in regions of maximum likelihood. Then, we
propose an optimization objective that recovers the most likely point on the
manifold given a sample from the perturbed distribution. Finally, we focus on
3D point clouds for which we utilize the explicit nature of NFs, i.e. surface
normals extracted from the gradient of the log-likelihood and the
log-likelihood itself, to apply Poisson surface reconstruction to refine
generated point sets.
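As a rough illustration of the recovery objective described above, the following PyTorch sketch maximizes the flow's log-likelihood jointly with an assumed isotropic Gaussian noise-model term, and extracts surface normals from the gradient of the log-likelihood. `nf.log_prob` is a hypothetical API, and the paper's exact objective, noise model and optimizer may differ.

```python
import torch

def project_to_manifold(nf, x_noisy, sigma=0.1, steps=200, lr=1e-2):
    """Recover the most likely manifold point for a perturbed sample.

    Minimal sketch: maximizes log p_NF(x) plus a Gaussian noise-model term,
    i.e. a MAP estimate under an *assumed* isotropic Gaussian noise model.
    """
    x = x_noisy.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        log_p = nf.log_prob(x).sum()                                # flow likelihood
        log_noise = -((x - x_noisy) ** 2).sum() / (2 * sigma ** 2)  # noise model
        (-(log_p + log_noise)).backward()
        opt.step()
    return x.detach()

def surface_normals(nf, points):
    """Normals as the normalized gradient of the log-likelihood,
    as the abstract describes for 3D point clouds."""
    pts = points.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(nf.log_prob(pts).sum(), pts)
    return grad / grad.norm(dim=-1, keepdim=True)
```

The normals and log-likelihoods could then feed an off-the-shelf Poisson surface reconstruction implementation (e.g. Open3D's `TriangleMesh.create_from_point_cloud_poisson`) to refine generated point sets, as the abstract describes.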
Related papers
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images after a certain amount of noise is added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
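As a rough sketch of the reconstruction-based recipe GLAD builds on: noise an input, denoise it, and score the reconstruction error. The `denoiser` network, the fixed timestep and the linear noising schedule below are simplifying assumptions; making the noise level and denoising adaptive, globally and locally, is exactly GLAD's contribution.

```python
import torch

def anomaly_score(denoiser, x, t=0.5):
    """Reconstruction-based anomaly scoring, the generic recipe GLAD adapts.

    `denoiser(x_t, t)` is a hypothetical network predicting a clean image;
    this fixed-timestep sketch omits GLAD's global/local adaptivity.
    """
    with torch.no_grad():
        noise = torch.randn_like(x)
        x_t = (1 - t) * x + t * noise   # simplified forward (noising) process
        x_rec = denoiser(x_t, t)        # reconstruct a "normal" counterpart
        return ((x - x_rec) ** 2).flatten(1).mean(dim=1)  # per-sample error
```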
- Out-of-distribution detection using normalizing flows on the data manifold [3.725042082196983]
This study investigates the effect of manifold learning using normalizing flows on out-of-distribution detection.
We show that manifold learning improves the out-of-distribution detection ability of a class of likelihood-based models known as normalizing flows.
arXiv Detail & Related papers (2023-08-26T07:35:16Z)
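For context, the likelihood-based OOD score this paper improves can be sketched in a few lines; the `nf.log_prob` API and the threshold calibration are assumptions, not the paper's code.

```python
import torch

def ood_flags(nf, x, threshold):
    """Likelihood-based OOD detection with a normalizing flow.

    `threshold` would be calibrated on held-out in-distribution data; the
    cited paper's point is that fitting the flow on a learned manifold
    improves exactly this score.
    """
    with torch.no_grad():
        log_p = nf.log_prob(x)
    return log_p < threshold  # True = flagged as out-of-distribution
```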
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The distribution generated from the estimated score function captures the geometric structure of the data and converges to a small neighborhood of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms based on both maximization of the log-likelihood and optimization of the reverse Kullback-Leibler divergence.
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
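As a sketch of the idea, the base distribution becomes a Gaussian proposal filtered by a learned acceptance function; `accept_net` below is a hypothetical stand-in for that network, and its training (by log-likelihood or reverse-KL optimization) is omitted.

```python
import torch

def sample_resampled_base(accept_net, dim, n, max_tries=100):
    """Draw from a base distribution defined by learned rejection sampling.

    `accept_net` maps latents to acceptance probabilities in (0, 1),
    e.g. a small MLP with a sigmoid output (assumed here).
    """
    accepted = []
    with torch.no_grad():
        for _ in range(max_tries):
            z = torch.randn(n, dim)           # Gaussian proposal
            a = accept_net(z).squeeze(-1)     # learned acceptance probability
            keep = torch.rand(n) < a          # stochastic accept/reject
            accepted.append(z[keep])
            if sum(t.shape[0] for t in accepted) >= n:
                break
    return torch.cat(accepted)[:n]
```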
- Flow Based Models For Manifold Data [11.344428134774475]
Flow-based generative models typically define a latent space with dimensionality identical to the observational space.
In many problems, the data do not populate the full ambient space in which they reside, but rather a lower-dimensional manifold.
We propose to learn a manifold prior that affords benefits to both sample generation and representation quality.
arXiv Detail & Related papers (2021-09-29T06:48:01Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows [0.0]
Normalizing flows provide tractable density estimation by transforming a simple base distribution into a complex target distribution.
Standard flows, however, cannot represent distributions supported on lower-dimensional manifolds, and recent attempts to remedy this have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation.
We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data.
arXiv Detail & Related papers (2021-06-09T18:00:00Z)
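To see why a conformal embedding preserves tractable densities, consider the change of variables for a fixed conformal map; the sketch below uses the inverse stereographic projection onto the unit sphere, not the paper's trainable embedding.

```python
import torch

def density_on_sphere(z, log_p_z):
    """Exact density through a *fixed* conformal embedding, for illustration.

    Inverse stereographic projection R^2 -> S^2 has conformal factor
    lambda(z) = 2 / (1 + |z|^2). For any conformal embedding, the change
    of variables needs only this scalar:
        log p_X(g(z)) = log p_Z(z) - k * log lambda(z),  with k = 2 here.
    """
    r2 = (z ** 2).sum(dim=-1)      # |z|^2, shape (...,)
    lam = 2.0 / (1.0 + r2)         # conformal factor
    x = torch.stack([2 * z[..., 0], 2 * z[..., 1], r2 - 1], dim=-1) / (1 + r2).unsqueeze(-1)
    return x, log_p_z - 2 * torch.log(lam)  # exact density on the sphere
```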
- Learning Energy-Based Models by Diffusion Recovery Likelihood [61.069760183331745]
We present a diffusion recovery likelihood method to tractably learn and sample from a sequence of energy-based models.
After training, synthesized images can be generated by a sampling process that initializes from a Gaussian white noise distribution.
On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs.
arXiv Detail & Related papers (2020-12-15T07:09:02Z)
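One stage of this method can be sketched as Langevin dynamics on a recovery likelihood that conditions on a noisier sample; the `energy` network and step sizes below are placeholders, and the paper chains many such stages starting from Gaussian white noise.

```python
import torch

def langevin_recovery_step(energy, x_tilde, sigma, n_steps=30, step_size=1e-3):
    """Langevin sampling from a recovery likelihood (one stage, sketched).

    Samples x ~ p(x | x_tilde) ∝ exp(-E(x) - |x - x_tilde|^2 / (2 sigma^2)),
    i.e. a denoising step guided by an assumed energy network E.
    """
    x = x_tilde.clone().requires_grad_(True)
    for _ in range(n_steps):
        log_p = -energy(x).sum() - ((x - x_tilde) ** 2).sum() / (2 * sigma ** 2)
        (grad,) = torch.autograd.grad(log_p, x)
        with torch.no_grad():  # Langevin update: drift plus Gaussian noise
            x += 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(x)
    return x.detach()
```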
- Flow-Based Likelihoods for Non-Gaussian Inference [0.0]
We investigate the use of data-driven likelihoods to bypass a key assumption made in many scientific analyses: that the likelihood of the data is Gaussian.
We show that the likelihood can be reconstructed to a precision equal to that of sampling error due to a finite sample size.
By introducing a suite of tests that can capture different levels of non-Gaussianity (NG) in the data, we show that the success or failure of traditional data-driven likelihoods can be tied back to the structure of the NG in the data.
arXiv Detail & Related papers (2020-07-10T18:00:00Z)