Diffusion Density Estimators
- URL: http://arxiv.org/abs/2410.06986v1
- Date: Wed, 9 Oct 2024 15:21:53 GMT
- Title: Diffusion Density Estimators
- Authors: Akhil Premkumar
- Abstract summary: We introduce a new, highly parallelizable method that computes log densities without the need to solve a flow.
Our approach is based on estimating a path integral by Monte Carlo, in a manner identical to the simulation-free training of diffusion models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the use of diffusion models as neural density estimators. The current approach to this problem involves converting the generative process to a smooth flow, known as the Probability Flow ODE. The log density at a given sample can be obtained by solving the ODE with a black-box solver. We introduce a new, highly parallelizable method that computes log densities without the need to solve a flow. Our approach is based on estimating a path integral by Monte Carlo, in a manner identical to the simulation-free training of diffusion models. We also study how different training parameters affect the accuracy of the density calculation, and offer insights into how these models can be made more scalable and efficient.
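To make the baseline concrete: the probability-flow route described in the abstract computes log p_0(x) by integrating d/dt log p_t(x_t) = -div v(x_t, t) along the ODE trajectory, with the divergence usually estimated by Hutchinson's trace estimator. Below is a minimal sketch of that baseline, assuming `velocity` is a stand-in for a trained model's probability-flow vector field and that the prior at t = T is standard Gaussian; the explicit Euler solver is an arbitrary choice of black-box integrator.

```python
import math
import torch

def hutchinson_div(velocity, x, t, n_probes=4):
    """Unbiased estimate of div_x velocity(x, t) via Hutchinson's trace trick."""
    x = x.detach().requires_grad_(True)
    div = torch.zeros(x.shape[0])
    for _ in range(n_probes):
        eps = torch.randn_like(x)
        v = velocity(x, t)
        (vjp,) = torch.autograd.grad(v, x, grad_outputs=eps)  # eps^T J
        div = div + (vjp * eps).sum(dim=-1) / n_probes        # eps^T J eps ~ tr J
    return div

def log_density(velocity, x0, T=1.0, n_steps=500):
    """log p_0(x0) via the Probability Flow ODE: d/dt log p_t(x_t) = -div v(x_t, t),
    so the divergence integrated along the trajectory corrects the log-prior at t = T."""
    dt = T / n_steps
    x, corr = x0.clone(), torch.zeros(x0.shape[0])
    for i in range(n_steps):
        t = torch.full((x0.shape[0],), i * dt)
        corr = corr + dt * hutchinson_div(velocity, x, t)
        with torch.no_grad():
            x = x + dt * velocity(x, t)            # follow the flow toward t = T
    d = x0.shape[-1]
    log_prior = -0.5 * (x ** 2).sum(dim=-1) - 0.5 * d * math.log(2 * math.pi)
    return log_prior + corr
```

Note the sequential structure: each Euler step depends on the previous one. The paper's contribution, per the abstract, replaces this sequential solve with a Monte Carlo estimate of a path integral that parallelizes in the same way simulation-free diffusion training does.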
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We approach the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate solutions of the respective partial differential equations (PDEs).
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
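Since the entry above hinges on PINN training, here is a minimal, generic sketch of the idea: a network u(x, t) is fit by minimizing the squared PDE residual at randomly sampled collocation points, with no mesh or simulation. The heat equation u_t = u_xx stands in for the paper's actual PDEs, and the architecture and constants are arbitrary.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_residual(x, t):
    x = x.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx                          # residual of the heat equation u_t = u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.rand(256) * 2 - 1                # collocation points in [-1, 1]
    t = torch.rand(256)
    loss = pde_residual(x, t).pow(2).mean()    # + boundary/initial-condition terms in practice
    opt.zero_grad(); loss.backward(); opt.step()
```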
- New algorithms for sampling and diffusion models [0.0]
We introduce a novel sampling method for known distributions and a new algorithm for diffusion generative models with unknown distributions.
Our approach is inspired by the concept of the reverse diffusion process, widely adopted in diffusion generative models.
arXiv Detail & Related papers (2024-06-14T02:30:04Z)
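The reverse diffusion process the entry refers to is, in its standard form, Anderson's time-reversed SDE, discretized here with Euler-Maruyama. To keep the sketch self-contained the target is a 1-D Gaussian, so the score of every noised marginal is available in closed form; in a real generative model a trained score network would replace `score`.

```python
import numpy as np

mu, sigma, T, n_steps, n_samples = 2.0, 0.5, 5.0, 1000, 10_000
rng = np.random.default_rng(0)

def score(x, t):
    """Closed-form score of p_t for the OU forward process dx = -x dt + sqrt(2) dW."""
    m = np.exp(-t) * mu
    v = np.exp(-2 * t) * sigma ** 2 + (1 - np.exp(-2 * t))
    return -(x - m) / v

dt = T / n_steps
x = rng.standard_normal(n_samples)            # start from the N(0,1) prior at t = T
for i in range(n_steps):
    t = T - i * dt                            # integrate the reverse SDE backward in time
    drift = -x - 2.0 * score(x, t)            # f(x,t) - g^2 * score, with f = -x, g = sqrt(2)
    x = x - drift * dt + np.sqrt(2 * dt) * rng.standard_normal(n_samples)

print(x.mean(), x.std())                      # should approach mu = 2.0 and sigma = 0.5
```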
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
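A toy, grid-based illustration of the Sobolev-penalty idea: maximize the likelihood of the data while penalizing a discretized first-order Sobolev seminorm. The paper itself regularizes a Sobolev norm of a pre-density in a reproducing-kernel setting; the 1-D grid, the penalty on the log-density, and all constants below are simplifications for illustration only.

```python
import torch

data = torch.cat([torch.randn(500) * 0.3 - 1, torch.randn(500) * 0.3 + 1])
grid = torch.linspace(-3, 3, 200)
h = grid[1] - grid[0]
g = torch.zeros(200, requires_grad=True)             # log unnormalized density on the grid

idx = ((data - grid[0]) / h).long().clamp(0, 199)    # nearest grid cell for each sample
opt = torch.optim.Adam([g], lr=0.05)
lam = 1e-3                                           # penalty strength

for step in range(2000):
    log_Z = torch.logsumexp(g + torch.log(h), dim=0)     # log of integral exp(g) dx
    nll = -(g[idx] - log_Z).mean()                       # negative log-likelihood of the data
    penalty = ((g[1:] - g[:-1]) / h).pow(2).sum() * h    # discrete int |g'(x)|^2 dx
    opt.zero_grad()
    (nll + lam * penalty).backward()
    opt.step()
```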
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
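The "ensembles of random particles" view has a one-line classical instance: the probabilistic (Feynman-Kac) representation writes the convection-diffusion solution u_t + c*u_x = D*u_xx as an expectation over Brownian particles, which Monte Carlo estimates pointwise. The paper turns such representations into training signals for a neural solver; the constants below are arbitrary.

```python
import numpy as np

D, c = 0.1, 1.0
u0 = lambda x: np.exp(-x ** 2)               # initial condition u(x, 0)
rng = np.random.default_rng(0)

def u(x, t, n_particles=100_000):
    """Monte Carlo estimate: u(x, t) = E[ u0(x - c*t + sqrt(2*D*t) * Z) ], Z ~ N(0,1)."""
    z = rng.standard_normal(n_particles)
    return u0(x - c * t + np.sqrt(2 * D * t) * z).mean()

print(u(1.0, 0.5))                           # pointwise solution estimate at x = 1, t = 0.5
```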
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
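To show what "parallel decoding" means at the interface level: a trajectory-valued model consumes the initial noise together with an entire grid of time points and emits every intermediate state in one forward pass, instead of stepping through time sequentially. The untrained toy MLP below only stands in for the paper's neural operator and shows the calling convention, not its architecture.

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.GELU(),
                                 nn.Linear(hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, z, t_grid):
        # z: (batch, dim) initial noise; t_grid: (n_times,) query times
        b, n = z.shape[0], t_grid.shape[0]
        zt = z.unsqueeze(1).expand(b, n, -1)          # broadcast the noise over all times
        tt = t_grid.view(1, n, 1).expand(b, n, 1)
        return self.net(torch.cat([zt, tt], dim=-1))  # (batch, n_times, dim): all steps at once

model = TrajectoryDecoder(dim=2)
x_all = model(torch.randn(16, 2), torch.linspace(0.0, 1.0, 10))  # decoded in parallel
print(x_all.shape)                                    # torch.Size([16, 10, 2])
```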
- Probability flow solution of the Fokker-Planck equation [10.484851004093919]
We introduce an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability.
Unlike the stochastic dynamics, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time.
Our approach is based on recent advances in score-based diffusion for generative modeling.
arXiv Detail & Related papers (2022-06-09T17:37:09Z)
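The deterministic push-forward described above can be checked end-to-end on an Ornstein-Uhlenbeck toy problem, where the score of p_t is closed-form: integrating dx/dt = f(x, t) - D * grad log p_t(x) transports samples so their statistics match the Fokker-Planck solution at every time, with no injected noise. All constants below are illustrative.

```python
import numpy as np

m0, v0, T, n_steps = 2.0, 0.25, 2.0, 2000
rng = np.random.default_rng(0)

def moments(t):
    """Analytic mean/variance of p_t for dx = -x dt + sqrt(2) dW started at N(m0, v0)."""
    return m0 * np.exp(-t), v0 * np.exp(-2 * t) + 1 - np.exp(-2 * t)

x = m0 + np.sqrt(v0) * rng.standard_normal(50_000)   # samples from p_0
dt = T / n_steps
for i in range(n_steps):
    m, v = moments(i * dt)
    x += dt * (-x + (x - m) / v)      # velocity = f - D * score, with f = -x, D = 1
print(x.mean(), x.var())              # matches moments(T), approx. (0.271, 0.986)
```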
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient flow and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
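For reference, a minimal input convex neural network of the kind used to discretize JKO steps: the Amos et al. construction, where non-negative weights on the hidden-to-hidden path plus convex, non-decreasing activations make the scalar output convex in x, so the gradient of the potential can serve as a transport map. Sizes and activations below are arbitrary choices, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            # clamping projects the hidden-to-hidden weights to be non-negative,
            # which (with convex non-decreasing softplus) guarantees convexity in x
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)

f = ICNN(dim=2)
x = torch.randn(8, 2, requires_grad=True)
(transport_map,) = torch.autograd.grad(f(x).sum(), x)  # gradient of the convex potential
```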
- Conditional Density Estimation via Weighted Logistic Regressions [0.30458514384586394]
We propose a novel parametric conditional density estimation method by showing the connection between the general density and the likelihood function of inhomogeneous Poisson process models.
The maximum likelihood estimates can be obtained via weighted logistic regressions, and the computation can be accelerated substantially by combining a block-wise alternating scheme with local case-control sampling.
arXiv Detail & Related papers (2020-10-21T11:08:25Z)
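The basic connection this entry builds on fits in a few lines: a logistic regression separating data from samples of a known reference density q recovers the log density ratio, so log p(x) ~ log q(x) + logit(x) when the classes are balanced. The paper's weighted, block-wise conditional scheme is more elaborate; this sketch covers only the unconditional trick, and the Gaussians are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=2000)                # "unknown" density p, to be estimated
ref = rng.normal(0.0, 2.0, size=2000)                 # known reference density q

X = np.concatenate([data, ref]).reshape(-1, 1)
y = np.concatenate([np.ones(2000), np.zeros(2000)])   # 1 = data, 0 = reference
feats = PolynomialFeatures(degree=2, include_bias=False)
clf = LogisticRegression(max_iter=1000).fit(feats.fit_transform(X), y)

def log_density(x):
    logit = clf.decision_function(feats.transform(x.reshape(-1, 1)))
    return norm.logpdf(x, 0.0, 2.0) + logit           # log p ~ log q + logit

print(log_density(np.array([1.0])))                   # true value: norm.logpdf(1, 1, 0.5) ~ -0.226
```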
- Learning Generative Models using Denoising Density Estimators [29.068491722778827]
We introduce a new generative model based on denoising density estimators (DDEs).
Our main contribution is a novel technique to obtain generative models by minimizing the KL-divergence directly.
Experimental results demonstrate substantial improvement in density estimation and competitive performance in generative model training.
arXiv Detail & Related papers (2020-01-08T20:30:40Z)
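Denoising-based density estimation in its most standard form is denoising score matching: a network trained to predict the noise direction back to the data converges to the score of the smoothed density, score_net(x) = grad log p_sigma(x). The paper's DDE objective differs in detail but rests on the same denoising principle; the toy data and architecture below are arbitrary.

```python
import torch
import torch.nn as nn

sigma = 0.1                                   # noise level of the smoothed density p_sigma
score_net = nn.Sequential(nn.Linear(2, 128), nn.SiLU(),
                          nn.Linear(128, 128), nn.SiLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
data = torch.randn(10_000, 2) @ torch.tensor([[1.0, 0.0], [0.8, 0.6]])  # toy correlated Gaussian

for step in range(2000):
    x = data[torch.randint(0, len(data), (256,))]
    noise = torch.randn_like(x)
    # at the optimum, sigma * score_net(x + sigma*noise) = -E[noise | noisy point]
    loss = ((sigma * score_net(x + sigma * noise) + noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```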
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
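The textbook instance of the entropy/Fokker-Planck/Wasserstein connection invoked above is the Jordan-Kinderlehrer-Otto result: Langevin dynamics on an energy E(x) is the Wasserstein gradient flow of the relative entropy KL(p || exp(-E)/Z). The paper's second-order scheme refines this first-order picture; below is only the minimal unadjusted Langevin sketch on a double-well energy, with step size and energy chosen arbitrarily.

```python
import numpy as np

energy_grad = lambda x: 4 * x * (x ** 2 - 1)     # E(x) = (x^2 - 1)^2, a double well
rng = np.random.default_rng(0)

x = rng.standard_normal(20_000)                  # particle ensemble approximating p_0
eta = 1e-3                                       # step size of the discretized flow
for _ in range(20_000):
    x += -eta * energy_grad(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)

# the ensemble now approximates the Gibbs density proportional to exp(-E(x)),
# the minimizer of the relative entropy along the gradient flow
print(np.mean(np.abs(x)))                        # mass concentrates near the wells at +-1
```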
This list is automatically generated from the titles and abstracts of the papers on this site.