Adaptive Annealed Importance Sampling with Constant Rate Progress
- URL: http://arxiv.org/abs/2306.15283v1
- Date: Tue, 27 Jun 2023 08:15:28 GMT
- Title: Adaptive Annealed Importance Sampling with Constant Rate Progress
- Authors: Shirin Goshtasbpour, Victor Cohen, Fernando Perez-Cruz
- Abstract summary: Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
- Score: 68.8204255655161
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Annealed Importance Sampling (AIS) synthesizes weighted samples from an
intractable distribution given its unnormalized density function. This
algorithm relies on a sequence of interpolating distributions bridging the
target to an initial tractable distribution such as the well-known geometric
mean path of unnormalized distributions which is assumed to be suboptimal in
general. In this paper, we prove that the geometric annealing corresponds to
the distribution path that minimizes the KL divergence between the current
particle distribution and the desired target when the feasible change in the
particle distribution is constrained. Following this observation, we derive the
constant rate discretization schedule for this annealing sequence, which
adjusts the schedule to the difficulty of moving samples between the initial
and the target distributions. We further extend our results to $f$-divergences
and present the respective dynamics of annealing sequences based on which we
propose the Constant Rate AIS (CR-AIS) algorithm and its efficient
implementation for $\alpha$-divergences. We empirically show that CR-AIS
performs well on multiple benchmark distributions while avoiding the
computationally expensive tuning loop in existing Adaptive AIS.
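For intuition, the following is a minimal, hypothetical sketch of plain AIS along the geometric mean path $\pi_\beta(x) \propto p_0(x)^{1-\beta} p_1(x)^{\beta}$ with a fixed linear $\beta$-schedule; it is not the authors' CR-AIS implementation, whose point is precisely to replace the fixed schedule with increments chosen so that the divergence progress per step stays constant. The toy densities log_p0 and log_p1, the random-walk Metropolis transition kernel, and all parameter values are illustrative assumptions.

import numpy as np

def log_p0(x):
    # tractable initial density: standard normal (up to a constant)
    return -0.5 * np.sum(x ** 2, axis=-1)

def log_p1(x):
    # unnormalized target density: a shifted Gaussian with variance 0.5
    return -np.sum((x - 3.0) ** 2, axis=-1)

def ais_geometric(num_particles=1000, num_steps=100, step_size=0.3, dim=1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((num_particles, dim))   # exact samples from p_0
    log_w = np.zeros(num_particles)
    betas = np.linspace(0.0, 1.0, num_steps + 1)    # fixed schedule; CR-AIS adapts this
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental importance weight: log pi_b(x) - log pi_{b_prev}(x)
        log_w += (b - b_prev) * (log_p1(x) - log_p0(x))
        # one random-walk Metropolis move leaving pi_b invariant
        log_pi = lambda z: (1.0 - b) * log_p0(z) + b * log_p1(z)
        prop = x + step_size * rng.standard_normal(x.shape)
        accept = np.log(rng.uniform(size=num_particles)) < log_pi(prop) - log_pi(x)
        x[accept] = prop[accept]
    # self-normalized estimate of log(Z_1 / Z_0) from the weighted samples
    m = log_w.max()
    log_Z_ratio = m + np.log(np.mean(np.exp(log_w - m)))
    return x, log_w, log_Z_ratio

samples, log_weights, log_Z = ais_geometric()
print("estimated log Z1/Z0:", log_Z)   # true value here is -0.5 * log(2), roughly -0.347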
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Mixing of the No-U-Turn Sampler and the Geometry of Gaussian Concentration [0.0]
We prove that the mixing time of the No-U-Turn Sampler (NUTS) scales as $d^{1/4}$, up to logarithmic factors, where $d$ is the dimension.
Specifically, concentration of measure results in a striking uniformity in NUTS' locally adapted transitions, which holds with high probability.
arXiv Detail & Related papers (2024-10-09T15:17:01Z) - Soft-constrained Schrodinger Bridge: a Stochastic Control Approach [4.922305511803267]
Schr"odinger bridge can be viewed as a continuous-time control problem where the goal is to find an optimally controlled diffusion process.
We propose to generalize this problem by allowing the terminal distribution to differ from the target but penalizing the Kullback-Leibler divergence between the two distributions.
One application is the development of robust generative diffusion models.
arXiv Detail & Related papers (2024-03-04T04:10:24Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method with deterministic updates, resulting in a deterministic evolution of the particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z) - Score-Based Diffusion meets Annealed Importance Sampling [89.92133671626327]
Annealed Importance Sampling remains one of the most effective methods for marginal likelihood estimation.
We leverage recent progress in score-based generative modeling to approximate the optimal extended target distribution for AIS proposals.
arXiv Detail & Related papers (2022-08-16T12:13:29Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Relative Entropy Gradient Sampler for Unnormalized Distributions [14.060615420986796]
We propose the relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.
REGS is a particle method that seeks a sequence of simple nonlinear transforms iteratively pushing the initial samples from a reference distribution into the samples from an unnormalized target distribution.
arXiv Detail & Related papers (2021-10-06T14:10:38Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with
Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with constant step size, and present variations of the method that yield favorable convergence.
We prove that when augmented with iteration averaging, SEG provably converges to the Nash equilibrium, and this rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.