From Minimax Optimal Importance Sampling to Uniformly Ergodic Importance-tempered MCMC
- URL: http://arxiv.org/abs/2506.19186v1
- Date: Mon, 23 Jun 2025 23:05:06 GMT
- Title: From Minimax Optimal Importance Sampling to Uniformly Ergodic Importance-tempered MCMC
- Authors: Quan Zhou
- Abstract summary: We make two closely related theoretical contributions to the use of importance sampling schemes. First, we prove that the minimax optimal trial distribution coincides with the target if and only if the target distribution has no atom with probability greater than $1/2$. Second, we argue that it is often advantageous to run the Metropolis--Hastings algorithm with a tempered stationary distribution.
- Score: 4.662958544712181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We make two closely related theoretical contributions to the use of importance sampling schemes. First, for independent sampling, we prove that the minimax optimal trial distribution coincides with the target if and only if the target distribution has no atom with probability greater than $1/2$, where "minimax" means that the worst-case asymptotic variance of the self-normalized importance sampling estimator is minimized. When a large atom exists, it should be downweighted by the trial distribution. A similar phenomenon holds for a continuous target distribution concentrated on a small set. Second, we argue that it is often advantageous to run the Metropolis--Hastings algorithm with a tempered stationary distribution, $\pi(x)^\beta$, and correct for the bias by importance weighting. The dynamics of this "importance-tempered" sampling scheme can be described by a continuous-time Markov chain. We prove that for one-dimensional targets with polynomial tails, $\pi(x) \propto (1 + |x|)^{-\gamma}$, this chain is uniformly ergodic if and only if $1/\gamma < \beta < (\gamma - 2)/\gamma$. These results suggest that for target distributions with light or polynomial tails of order $\gamma > 3$, importance tempering can improve the precision of time-average estimators and essentially eliminate the need for burn-in.
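As a rough illustration of this importance-tempered scheme, the NumPy sketch below runs random-walk Metropolis--Hastings on the tempered density $\pi(x)^\beta$ for a polynomial-tailed target and corrects time averages with self-normalized weights $\pi(x)^{1-\beta}$. The temperature $\beta = 0.4$ is chosen inside the interval $(1/\gamma, (\gamma-2)/\gamma)$ for $\gamma = 4$; the target, proposal scale, and test function are illustrative choices, and the paper's ergodicity result concerns a continuous-time version of this chain.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma_, beta = 4.0, 0.4                            # tail index and temperature
log_pi = lambda x: -gamma_ * np.log1p(np.abs(x))   # unnormalized log target

# Random-walk Metropolis--Hastings targeting the tempered density pi(x)^beta.
n, step = 200_000, 2.0
x, chain = 0.0, np.empty(200_000)
for t in range(n):
    prop = x + step * rng.standard_normal()
    if np.log(rng.random()) < beta * (log_pi(prop) - log_pi(x)):
        x = prop
    chain[t] = x

# Correct the bias by self-normalized importance weighting with
# weights pi(x) / pi(x)^beta = pi(x)^(1 - beta).
logw = (1.0 - beta) * log_pi(chain)
w = np.exp(logw - logw.max())                      # stabilized weights
h = chain ** 2                                     # test function h(x) = x^2
print(np.sum(w * h) / np.sum(w))                   # approx E_pi[X^2] = 1 here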
Related papers
- Near-Optimal Clustering in Mixture of Markov Chains [74.3828414695655]
We study the problem of clustering $T$ trajectories of length $H$, each generated by one of $K$ unknown ergodic Markov chains over a finite state space of size $S$. We derive an instance-dependent, high-probability lower bound on the clustering error rate, governed by the weighted KL divergence between the transition kernels of the chains. We then present a novel two-stage clustering algorithm. A loose illustration of the setting follows.
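The sketch below is not the paper's two-stage algorithm: it simply estimates a smoothed transition matrix per trajectory and clusters trajectories by symmetrized KL divergence between the estimated kernels; the toy chains, smoothing, and threshold rule are assumptions of mine.

```python
import numpy as np

def empirical_kernel(traj, S, alpha=1.0):
    """Laplace-smoothed empirical transition matrix of one trajectory."""
    counts = np.full((S, S), alpha)
    for s, s_next in zip(traj[:-1], traj[1:]):
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def kernel_kl(P, Q):
    """Row-averaged KL divergence between two transition matrices."""
    return np.mean(np.sum(P * np.log(P / Q), axis=1))

rng = np.random.default_rng(1)
S, H = 3, 500
P1 = np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]])
P2 = np.array([[.1, .45, .45], [.45, .1, .45], [.45, .45, .1]])
def simulate(P):
    x = [0]
    for _ in range(H - 1):
        x.append(rng.choice(S, p=P[x[-1]]))
    return x
trajs = [simulate(P1) for _ in range(5)] + [simulate(P2) for _ in range(5)]

# Threshold clustering on symmetrized KL between estimated kernels.
kernels = [empirical_kernel(t, S) for t in trajs]
labels, reps = [], []
for P in kernels:
    d = [0.5 * (kernel_kl(P, R) + kernel_kl(R, P)) for R in reps]
    if d and min(d) < 0.2:                  # assumed threshold
        labels.append(int(np.argmin(d)))
    else:
        reps.append(P)
        labels.append(len(reps) - 1)
print(labels)                               # expect two recovered groups
```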
arXiv Detail & Related papers (2025-06-02T05:10:40Z)
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers. We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions. This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling [28.931489333515618]
We establish an oracle complexity of $\widetilde{O}\left(\frac{d\beta^2\mathcal{A}^2}{\varepsilon^6}\right)$ for the simple annealed Langevin Monte Carlo algorithm. Here, $\mathcal{A}$ represents the action of a curve of probability measures interpolating the target distribution $\pi$ and a readily sampleable distribution.
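A minimal sketch of annealed Langevin Monte Carlo in this spirit: unadjusted Langevin steps along a geometric interpolation between an easily sampled Gaussian and a non-log-concave (bimodal) target. The schedule, step size, and target are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Non-log-concave target: equal mixture of N(-3, 1) and N(3, 1).
def grad_log_target(x):
    a = np.exp(-0.5 * (x + 3.0) ** 2)
    b = np.exp(-0.5 * (x - 3.0) ** 2)
    return (-(x + 3.0) * a - (x - 3.0) * b) / (a + b)

grad_log_ref = lambda x: -x / 4.0          # reference distribution N(0, 4)

# Anneal along pi_t proportional to ref^(1 - t) * target^t.
n_particles, h = 1000, 0.05
x = 2.0 * rng.standard_normal(n_particles)  # start from the reference
for t in np.linspace(0.0, 1.0, 50):
    for _ in range(10):                     # a few unadjusted Langevin steps
        grad = (1.0 - t) * grad_log_ref(x) + t * grad_log_target(x)
        x = x + h * grad + np.sqrt(2.0 * h) * rng.standard_normal(n_particles)

print(x.mean(), (x > 0).mean())             # particles should cover both modes
```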
arXiv Detail & Related papers (2024-07-24T02:15:48Z)
- Estimation of entropy-regularized optimal transport maps between non-compactly supported measures [15.857723276537248]
This paper addresses the problem of estimating entropy-regularized optimal transport maps with squared-Euclidean cost between source and target measures that are sub-Gaussian.
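For context, a standard plug-in estimator of the entropic map: run Sinkhorn on samples with squared-Euclidean cost, then take the barycentric projection of the resulting coupling. This sketch uses toy Gaussian data and is not necessarily the exact estimator analyzed in the paper.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(3)

# Toy sub-Gaussian source and target samples (assumed data).
n, d, eps = 200, 2, 0.5
X = rng.standard_normal((n, d))             # source points
Y = rng.standard_normal((n, d)) + 2.0       # target points, shifted

# Squared-Euclidean cost; Sinkhorn iterations in the log domain.
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
logw = -np.log(n)                           # uniform weights on both sides
f = np.zeros(n)
g = np.zeros(n)
for _ in range(300):
    f = -eps * logsumexp((g[None, :] - C) / eps + logw, axis=1)
    g = -eps * logsumexp((f[:, None] - C) / eps + logw, axis=0)

# Entropic coupling and the barycentric-projection estimate of the map.
P = np.exp((f[:, None] + g[None, :] - C) / eps + 2 * logw)
T_hat = (P @ Y) / P.sum(axis=1, keepdims=True)
print(T_hat[:3])    # estimated map at the first three source points (~ X + 2)
```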
arXiv Detail & Related papers (2023-11-20T17:18:21Z)
- Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions [29.488555741982015]
We show a close connection between the mixing time of a broad class of Markov processes with generator $\mathcal{L}$ and an appropriately chosen generalized score matching loss.
We adapt techniques to speed up Markov chains to construct better score-matching losses.
In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss.
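For orientation, a minimal denoising score matching example (vanilla, without the generalized losses or preconditioning discussed above): with a linear score model and Gaussian data, the DSM objective reduces to ordinary least squares, so the fit can be checked against the known score of the smoothed density.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data from N(2, 1); the score of the noise-smoothed density is linear in x.
x = 2.0 + rng.standard_normal(5000)
sigma = 0.5
z = rng.standard_normal(x.shape)
x_noisy = x + sigma * z

# Denoising score matching: regress the target -z/sigma onto features of
# x_noisy. With a linear model s_theta(x) = theta_0 + theta_1 * x, the DSM
# objective is exactly an ordinary least-squares problem.
A = np.stack([np.ones_like(x_noisy), x_noisy], axis=1)
target = -z / sigma
theta, *_ = np.linalg.lstsq(A, target, rcond=None)

# True score of N(2, 1 + sigma^2) is s(x) = -(x - 2) / (1 + sigma^2).
print(theta)        # approx [2/(1+sigma^2), -1/(1+sigma^2)] = [1.6, -0.8]
```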
arXiv Detail & Related papers (2023-06-15T17:58:42Z)
- Robust Mean Estimation Without Moments for Symmetric Distributions [7.105512316884493]
We show that for a large class of symmetric distributions, the same error as in the Gaussian setting can be achieved efficiently.
We propose a sequence of efficient algorithms that approaches this optimal error.
Our algorithms are based on a generalization of the well-known filtering technique.
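A sketch of the basic filtering technique that the paper generalizes: repeatedly remove points with outsized projections onto the top eigenvector of the empirical covariance until the spectrum looks benign. The thresholds and stopping rule here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Inliers from a symmetric distribution plus a cluster of outliers.
d, n, n_out = 5, 1000, 80
X = np.vstack([rng.standard_normal((n, d)),                      # mean 0
               10.0 + 0.1 * rng.standard_normal((n_out, d))])    # outliers

def filtered_mean(X, n_iters=20):
    """Basic filtering: prune points with large projections onto the top
    eigenvector of the empirical covariance, then re-estimate."""
    X = X.copy()
    for _ in range(n_iters):
        mu = X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(X.T))
        if evals[-1] < 1.5:               # spectrum near identity: stop
            break
        scores = ((X - mu) @ evecs[:, -1]) ** 2
        X = X[scores < np.quantile(scores, 0.95)]   # drop worst 5%
    return X.mean(axis=0)

print(filtered_mean(X))   # close to the true mean 0, unlike X.mean(axis=0)
```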
arXiv Detail & Related papers (2023-02-21T17:52:23Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
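A simplified regularized kernel LSTD sketch for a toy MRP (my formulation, not necessarily the estimator analyzed in the paper): with kernel matrix $K_{ij} = k(s_i, s_j)$ and cross matrix $K'_{ij} = k(s'_i, s_j)$, solve $(K - \gamma K' + \lambda I)\alpha = r$ and set $\hat{V}(s) = \sum_i \alpha_i k(s_i, s)$.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy MRP on [0, 1]: s' = 0.9 s + noise, reward r(s) = s.
n, gamma_, lam = 300, 0.9, 1e-3
s = rng.random(n)
s_next = np.clip(0.9 * s + 0.05 * rng.standard_normal(n), 0.0, 1.0)
r = s

def rbf(a, b, h=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h * h))

# Regularized kernel LSTD: (K - gamma * K_next + lam * I) alpha = r.
K = rbf(s, s)
K_next = rbf(s_next, s)
alpha = np.linalg.solve(K - gamma_ * K_next + lam * np.eye(n), r)

s_grid = np.linspace(0.0, 1.0, 5)
print(rbf(s_grid, s) @ alpha)   # estimated value function on a grid
```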
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Optimal Sub-Gaussian Mean Estimation in $\mathbb{R}$ [5.457150493905064]
We present a novel estimator with sub-Gaussian convergence.
Our estimator does not require prior knowledge of the variance.
Our estimator construction and analysis gives a framework generalizable to other problems.
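The paper's estimator is not reproduced here; as background, the classical median-of-means estimator below already achieves sub-Gaussian-style deviations for heavy-tailed data without prior knowledge of the variance, with the number of blocks set by the desired confidence level.

```python
import numpy as np

def median_of_means(x, k, seed=7):
    """Split into k random blocks, average each, take the median of means."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(x), k)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(8)
x = rng.standard_t(df=3, size=10_000) + 5.0   # heavy-tailed, true mean 5
# With k on the order of log(1/delta), deviations are sub-Gaussian at level delta.
print(median_of_means(x, k=10), x.mean())
```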
arXiv Detail & Related papers (2020-11-17T02:47:24Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP). We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
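A minimal sketch of asynchronous Q-learning on a toy MDP: a single behavior trajectory is followed and only the visited (state, action) entry is updated at each step. The MDP, behavior policy, and step-size schedule are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# Tiny random MDP (assumed toy instance).
S, A, gamma_ = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel
R = rng.random((S, A))                       # rewards in [0, 1]

# Asynchronous Q-learning along one behavior trajectory: update only the
# (state, action) pair that is actually visited.
Q = np.zeros((S, A))
s = 0
for t in range(1, 100_000):
    a = rng.integers(A)                      # uniform behavior policy
    s_next = rng.choice(S, p=P[s, a])
    eta = 1.0 / (1.0 + 0.001 * t)            # assumed step-size schedule
    Q[s, a] += eta * (R[s, a] + gamma_ * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.max(axis=1))                         # approximate optimal values
```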
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator). We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
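A sketch of the plug-in approach the summary describes: query the generative model for each (state, action) pair, build the empirical MDP, and run plain value iteration on it. The toy MDP and sample budget are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# Ground-truth MDP, unknown to the learner (assumed toy instance).
S, A, gamma_, n_samples = 4, 2, 0.9, 2000
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))

# Generative model: draw next states for every (s, a); build the empirical MDP.
P_hat = np.zeros_like(P)
for s in range(S):
    for a in range(A):
        draws = rng.choice(S, size=n_samples, p=P[s, a])
        P_hat[s, a] = np.bincount(draws, minlength=S) / n_samples

# Plain model-based planning: value iteration on the empirical MDP.
V = np.zeros(S)
for _ in range(500):
    V = (R + gamma_ * P_hat @ V).max(axis=1)
print(V)    # near-optimal values from the plug-in model
```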
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
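A very loose illustration of the contrast (my toy construction, not the paper's DRBQO algorithm): the standard approach maximizes the plain Monte Carlo estimate over the fixed samples, while a posterior-sampling-flavored robust variant redraws the sample weights from a Dirichlet distribution and maximizes per draw.

```python
import numpy as np

rng = np.random.default_rng(11)

# i.i.d. environment samples and a grid of candidate decisions x.
xi = rng.standard_normal(50)                       # limited sample set
xs = np.linspace(-2.0, 2.0, 41)                    # candidate decisions
f = lambda x, e: -(x - e) ** 2                     # objective f(x, xi)

F = np.array([[f(x, e) for e in xi] for x in xs])  # (n_candidates, n_samples)

# Standard BQO: maximize the plain Monte Carlo estimate on the fixed samples.
x_std = xs[F.mean(axis=1).argmax()]

# Posterior-sampling flavor of robustness: perturb the sample weights with
# Dirichlet draws and pick the maximizer for each draw.
picks = []
for _ in range(200):
    w = rng.dirichlet(np.ones(len(xi)))
    picks.append(xs[(F @ w).argmax()])
print(x_std, np.median(picks))
```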
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.