Importance Corrected Neural JKO Sampling
- URL: http://arxiv.org/abs/2407.20444v1
- Date: Mon, 29 Jul 2024 22:49:59 GMT
- Title: Importance Corrected Neural JKO Sampling
- Authors: Johannes Hertrich, Robert Gruhlke
- Abstract summary: We combine continuous normalizing flows (CNFs) with rejection-resampling steps based on importance weights.
The resulting model can be trained iteratively, reduces the reverse Kullback-Leibler (KL) loss function in each step, and allows the generation of iid samples.
Numerical examples show that our method yields accurate results on various test distributions including high-dimensional multimodal targets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to sample from an unnormalized probability density function, we propose to combine continuous normalizing flows (CNFs) with rejection-resampling steps based on importance weights. We relate the iterative training of CNFs with regularized velocity fields to a JKO scheme and prove convergence of the involved velocity fields to the velocity field of the Wasserstein gradient flow (WGF). The alternation of local flow steps and non-local rejection-resampling steps allows us to overcome local minima or slow convergence of the WGF for multimodal distributions. Since the proposals of the rejection steps are generated by the model itself, they do not suffer from the common drawbacks of classical rejection schemes. The resulting model can be trained iteratively, reduces the reverse Kullback-Leibler (KL) loss function in each step, allows the generation of iid samples, and moreover allows the evaluation of the density underlying the generated samples. Numerical examples show that our method yields accurate results on various test distributions, including high-dimensional multimodal targets, and significantly outperforms the state of the art in almost all cases.
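To make the correction step concrete, the following is a minimal sketch of a generic importance-weight-based rejection-resampling step in Python/NumPy. It assumes access to the unnormalized target log-density and the model's log-density at the current samples; the acceptance rule and the resampling of rejected points are illustrative choices, not the exact scheme of the paper.

```python
import numpy as np

def rejection_resample(samples, log_q, log_p_tilde, rng=None):
    """One generic importance-weight-based rejection-resampling step.

    samples     : (n, d) array drawn from the current model density q
    log_q       : (n,) model log-density evaluated at the samples
    log_p_tilde : (n,) unnormalized target log-density at the samples

    Each point is accepted with probability proportional to its importance
    weight p_tilde(x) / q(x); rejected points are replaced by resampling
    from the weighted pool so that the sample size is preserved.  This is
    an illustrative scheme, not the exact rule used in the paper."""
    rng = np.random.default_rng() if rng is None else rng

    log_w = log_p_tilde - log_q      # log importance weights
    log_w -= log_w.max()             # max-normalize: weights now lie in (0, 1]
    w = np.exp(log_w)

    accept = rng.uniform(size=len(w)) < w   # importance-based rejection
    kept = samples[accept]

    # Replace the rejected points by multinomial resampling from the weighted pool.
    n_missing = len(samples) - len(kept)
    if n_missing > 0:
        idx = rng.choice(len(samples), size=n_missing, p=w / w.sum())
        kept = np.concatenate([kept, samples[idx]], axis=0)
    return kept
```

In the paper's setting, such corrections alternate with the local flow steps; since the proposals come from the model itself, they avoid the usual drawbacks of classical rejection schemes noted in the abstract.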
Related papers
- Diffusion-PINN Sampler [6.656265182236135]
We introduce a novel diffusion-based sampling algorithm that estimates the drift term by solving the governing partial differential equation of the log-density of the underlying SDE marginals via physics-informed neural networks (PINNs).
We prove that the error of the log-density approximation can be controlled by the PINN residual loss, enabling us to establish convergence guarantees for the Diffusion-PINN Sampler (DPS).
arXiv Detail & Related papers (2024-10-20T09:02:16Z)
- Liouville Flow Importance Sampler [2.3603292593876324]
We present the Liouville Flow Importance Sampler (LFIS), an innovative flow-based model for generating samples from unnormalized density functions.
LFIS learns a time-dependent velocity field that deterministically transports samples from a simple initial distribution to a complex target distribution.
We demonstrate the effectiveness of LFIS through its application to a range of benchmark problems, on many of which LFIS achieved state-of-the-art performance.
arXiv Detail & Related papers (2024-05-03T16:44:31Z)
- Sampling in Unit Time with Kernel Fisher-Rao Flow [0.0]
We introduce a new mean-field ODE and corresponding interacting particle systems (IPS) for sampling from an unnormalized target density.
The IPS are gradient-free, available in closed form, and only require the ability to sample from a reference density and compute the (unnormalized) target-to-reference density ratio.
arXiv Detail & Related papers (2024-01-08T13:43:56Z)
- Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion [56.38386580040991]
The Consistency Trajectory Model (CTM) is a generalization of Consistency Models (CM).
CTM enables the efficient combination of adversarial training and denoising score matching loss to enhance performance.
Unlike CM, CTM's access to the score function can streamline the adoption of established controllable/conditional generation methods.
arXiv Detail & Related papers (2023-10-01T05:07:17Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences (a generic AIS sketch appears after this related-papers list).
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Relative Entropy Gradient Sampler for Unnormalized Distributions [14.060615420986796]
We propose the relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.
REGS is a particle method that seeks a sequence of simple nonlinear transforms which iteratively push initial samples from a reference distribution toward samples from the unnormalized target distribution.
arXiv Detail & Related papers (2021-10-06T14:10:38Z)
- Stochastic Normalizing Flows [2.323220706791067]
We show that combining normalizing flows with stochastic sampling steps yields stochastic normalizing flows (SNFs), which learn the transformation of a simple prior distribution into a complex target distribution.
We derive an efficient training procedure by which both the sampler's and the flow's parameters can be optimized end-to-end.
We illustrate the representational power, sampling efficiency and correctness of SNFs on several benchmarks including applications to molecular sampling systems in equilibrium.
arXiv Detail & Related papers (2020-02-16T23:29:32Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the mapping from the base density to the output space is conditioned on an input x, in order to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
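As a concrete illustration of the weighted-sample idea behind the Adaptive Annealed Importance Sampling entry above, the following is a minimal, generic AIS sketch in Python/NumPy. It anneals from a standard Gaussian prior to an unnormalized target along a linear temperature schedule with random-walk Metropolis transitions; all names and parameters are illustrative and do not reproduce the Constant Rate AIS algorithm.

```python
import numpy as np

def ais(log_p_tilde, n_samples=1000, n_steps=50, dim=2, step_size=0.2, rng=None):
    """Generic annealed importance sampling from N(0, I) towards an
    unnormalized target exp(log_p_tilde).  Returns final samples and log
    importance weights; mean(exp(log_w)) estimates the normalizing constant."""
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(0.0, 1.0, n_steps + 1)        # annealing schedule

    def log_prior(x):                                  # normalized N(0, I) log-density
        return -0.5 * np.sum(x**2, axis=-1) - 0.5 * dim * np.log(2 * np.pi)

    def log_bridge(x, beta):                           # geometric interpolation
        return (1.0 - beta) * log_prior(x) + beta * log_p_tilde(x)

    x = rng.standard_normal((n_samples, dim))          # exact draws from the prior
    log_w = np.zeros(n_samples)

    for beta_prev, beta in zip(betas[:-1], betas[1:]):
        # Incremental importance weight for moving to the next bridge density.
        log_w += log_bridge(x, beta) - log_bridge(x, beta_prev)

        # One random-walk Metropolis step leaving the current bridge density invariant.
        prop = x + step_size * rng.standard_normal(x.shape)
        log_acc = log_bridge(prop, beta) - log_bridge(x, beta)
        accept = np.log(rng.uniform(size=n_samples)) < log_acc
        x[accept] = prop[accept]

    return x, log_w

# Example: a bimodal 2D target with well-separated modes.
def log_p_tilde(x):
    a = -0.5 * np.sum((x - 3.0) ** 2, axis=-1)
    b = -0.5 * np.sum((x + 3.0) ** 2, axis=-1)
    return np.logaddexp(a, b)

samples, log_w = ais(log_p_tilde)
shift = log_w.max()
print("log Z estimate:", np.log(np.mean(np.exp(log_w - shift))) + shift)
```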
This list is automatically generated from the titles and abstracts of the papers on this site.