Neural Inverse Transform Sampler
- URL: http://arxiv.org/abs/2206.11172v1
- Date: Wed, 22 Jun 2022 15:28:29 GMT
- Title: Neural Inverse Transform Sampler
- Authors: Henry Li, Yuval Kluger
- Abstract summary: We show that when modeling conditional densities with a neural network, $Z$ can be exactly and efficiently computed.
We introduce the Neural Inverse Transform Sampler (NITS), a novel deep learning framework for modeling and sampling from general, multidimensional, compactly-supported probability densities.
- Score: 4.061135251278187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Any explicit functional representation $f$ of a density is hampered by two
main obstacles when we wish to use it as a generative model: designing $f$ so
that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1}f$ integrates
to 1. This becomes increasingly complicated as $f$ itself becomes complicated.
In this paper, we show that when modeling one-dimensional conditional densities
with a neural network, $Z$ can be exactly and efficiently computed by letting
the network represent the cumulative distribution function of a target density,
and applying a generalized fundamental theorem of calculus. We also derive a
fast algorithm for sampling from the resulting representation by the inverse
transform method. By extending these principles to higher dimensions, we
introduce the \textbf{Neural Inverse Transform Sampler (NITS)}, a novel deep
learning framework for modeling and sampling from general, multidimensional,
compactly-supported probability densities. NITS is a highly expressive density
estimator that boasts end-to-end differentiability, fast sampling, and exact
and cheap likelihood evaluation. We demonstrate the applicability of NITS by
applying it to realistic, high-dimensional density estimation tasks:
likelihood-based generative modeling on the CIFAR-10 dataset, and density
estimation on the UCI suite of benchmark datasets, where NITS produces
compelling results rivaling or surpassing the state of the art.
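The core mechanism the abstract describes — representing a 1D CDF so that normalization is free and sampling reduces to CDF inversion — can be illustrated with a minimal numerical sketch. Here a tabulated CDF of an unnormalized density stands in for the paper's neural CDF; the function names and the example Gaussian-bump density are illustrative, not the paper's implementation.

```python
import numpy as np

def make_sampler(f, lo, hi, grid=10_000):
    """Build an inverse transform sampler for a density proportional to f on [lo, hi].

    The paper parameterizes the CDF F with a neural network; here we tabulate it.
    Normalizing the tabulated CDF handles Z = \\int f implicitly.
    """
    x = np.linspace(lo, hi, grid)
    cdf = np.cumsum(f(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # F(lo) = 0, F(hi) = 1

    def sample(n, rng=np.random.default_rng(0)):
        u = rng.uniform(size=n)
        # Invert F by interpolation: x = F^{-1}(u) for u ~ Uniform(0, 1)
        return np.interp(u, cdf, x)

    return sample

# Unnormalized Gaussian bump centered at 0.3 on [0, 1]
sampler = make_sampler(lambda x: np.exp(-0.5 * ((x - 0.3) / 0.1) ** 2), 0.0, 1.0)
samples = sampler(5000)
```

The fast sampling claimed by NITS follows the same principle: once the CDF is cheap to evaluate and invert, drawing a sample costs only a uniform draw plus an inversion.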
Related papers
- Adaptivity and Convergence of Probability Flow ODEs in Diffusion Generative Models [5.064404027153094]
This paper contributes to establishing theoretical guarantees for the probability flow ODE, a diffusion-based sampler known for its practical efficiency.
We demonstrate that, with accurate score function estimation, the probability flow ODE sampler achieves a convergence rate of $O(k/T)$ in total variation distance.
This dimension-free convergence rate improves upon existing results that scale with the typically much larger ambient dimension.
arXiv Detail & Related papers (2025-01-31T03:10:10Z)
- Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions [6.76974373198208]
We establish a fast convergence theory for the denoising diffusion probabilistic model (DDPM) under minimal assumptions.
We show that the convergence rate improves to $O(k/T)$, where $k$ is the intrinsic dimension of the target data distribution.
This highlights the ability of DDPM to automatically adapt to unknown low-dimensional structures.
arXiv Detail & Related papers (2024-09-27T17:59:10Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
arXiv Detail & Related papers (2023-07-19T10:26:29Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Using Intermediate Forward Iterates for Intermediate Generator Optimization [14.987013151525368]
Intermediate Generator Optimization can be incorporated into any standard autoencoder pipeline for the generative task.
We show applications of the IGO on two dense predictive tasks viz., image extrapolation, and point cloud denoising.
arXiv Detail & Related papers (2023-02-05T08:46:15Z)
- Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling [106.68533003806276]
We show that sampling can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods that work in the high-dimensional pixel space.
arXiv Detail & Related papers (2020-03-12T23:33:50Z)
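The DDLS summary above describes sampling latents from an energy that combines the prior log-density with the discriminator score; this is typically done with Langevin dynamics. Below is a hedged one-dimensional sketch of that idea: the quadratic prior energy and the analytic toy "discriminator" term are stand-ins chosen so the target is tractable, not the paper's actual GAN setup.

```python
import numpy as np

def langevin_sample(grad_energy, z0, steps=200, step=1e-2, rng=None):
    """Unadjusted Langevin dynamics targeting the density proportional to exp(-E(z))."""
    rng = rng or np.random.default_rng(0)
    z = z0.copy()
    for _ in range(steps):
        noise = rng.standard_normal(z.shape)
        # Gradient step on the energy plus Gaussian noise scaled for Langevin dynamics
        z = z - step * grad_energy(z) + np.sqrt(2 * step) * noise
    return z

# Toy energy: standard-normal prior term z^2/2 plus a "discriminator" term
# 2*(z - 2)^2 that pulls samples toward z = 2, so E(z) = z^2/2 + 2*(z - 2)^2.
grad_E = lambda z: z + 4.0 * (z - 2.0)  # dE/dz = 5z - 8; minimum at z = 1.6
z = langevin_sample(grad_E, np.zeros(2000))
```

Running many chains in parallel (the 2000-element array here) gives an empirical picture of the energy-based target; in DDLS the gradient of the discriminator output replaces the analytic term.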
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.