A note on concentration inequalities for the overlapped batch mean variance estimators for Markov chains
- URL: http://arxiv.org/abs/2505.08456v1
- Date: Tue, 13 May 2025 11:36:04 GMT
- Title: A note on concentration inequalities for the overlapped batch mean variance estimators for Markov chains
- Authors: Eric Moulines, Alexey Naumov, Sergey Samsonov
- Abstract summary: We study the concentration properties of quadratic forms associated with Markov chains using the martingale decomposition method introduced by Atchadé and Cattaneo (2014). Our main result provides explicit control of the $p$-th moment of the difference between the OBM estimator and the asymptotic variance of the Markov chain.
- Score: 17.628247025892215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the concentration properties of quadratic forms associated with Markov chains using the martingale decomposition method introduced by Atchadé and Cattaneo (2014). In particular, we derive concentration inequalities for the overlapped batch mean (OBM) estimators of the asymptotic variance for uniformly geometrically ergodic Markov chains. Our main result provides explicit control of the $p$-th moment of the difference between the OBM estimator and the asymptotic variance of the Markov chain, with explicit dependence on $p$ and the mixing time of the underlying chain.
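For concreteness, the OBM estimator discussed in the abstract can be sketched in a few lines of NumPy. This is a minimal implementation of the standard overlapped-batch-means formula (with the classical Meketon–Schmeiser scaling), not the authors' code; the function name and interface are illustrative.

```python
import numpy as np

def obm_variance(x, b):
    """Overlapped batch means (OBM) estimator of the asymptotic variance.

    x : 1-D array of n correlated samples (e.g. a Markov chain trajectory)
    b : batch size, 1 <= b < n
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    if not 1 <= b < n:
        raise ValueError("batch size b must satisfy 1 <= b < n")
    # Means of all n - b + 1 overlapping batches, computed via a cumulative sum.
    csum = np.concatenate(([0.0], np.cumsum(x)))
    batch_means = (csum[b:] - csum[:-b]) / b
    xbar = x.mean()
    # Classical OBM scaling of the squared deviations of batch means.
    return n * b / ((n - b) * (n - b + 1)) * np.sum((batch_means - xbar) ** 2)
```

For i.i.d. samples the estimator should recover the ordinary variance; for a mixing Markov chain it targets the asymptotic variance of the sample mean, with the batch size trading bias against variance.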
Related papers
- Attention (as Discrete-Time Markov) Chains [70.46604474584181]
We introduce a new interpretation of the attention matrix as a discrete-time Markov chain. Our main observation is that tokens corresponding to semantically similar regions form a set of metastable states. Using these lightweight tools, we demonstrate state-of-the-art zero-shot segmentation.
arXiv Detail & Related papers (2025-07-23T16:20:47Z)
- Uncertainty quantification for Markov chains with application to temporal difference learning [63.49764856675643]
We develop novel high-dimensional concentration inequalities and Berry-Esseen bounds for vector- and matrix-valued functions of Markov chains. We analyze the TD learning algorithm, a widely used method for policy evaluation in reinforcement learning.
arXiv Detail & Related papers (2025-02-19T15:33:55Z)
- Characterizing Dependence of Samples along the Langevin Dynamics and Algorithms via Contraction of $Φ$-Mutual Information [16.54557731304283]
We study the question of how fast the samples become approximately independent along popular Markov chains for continuous-space sampling. Our proof technique is based on showing that Strong Data Processing Inequalities (SDPIs) hold along the Markov chains.
arXiv Detail & Related papers (2024-02-26T23:05:02Z)
- Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting [64.0722630873758]
We consider a rather general and broad class of Markov chains, Itô chains, that resemble the Euler-Maruyama discretization of some stochastic differential equation.
We prove a bound in $W_2$-distance between the laws of our Itô chain and the corresponding differential equation.
arXiv Detail & Related papers (2023-10-09T18:38:56Z)
- Hoeffding's Inequality for Markov Chains under Generalized Concentrability Condition [15.228649445346473]
This paper studies Hoeffding's inequality for Markov chains under the generalized concentrability condition defined via an integral probability metric (IPM).
The flexibility of our framework allows Hoeffding's inequality to be applied beyond the ergodic Markov chains in the traditional sense.
arXiv Detail & Related papers (2023-10-04T16:21:23Z)
- Covariate shift in nonparametric regression with Markovian design [0.0]
We show that convergence rates for a smoothness risk of a Nadaraya-Watson kernel estimator are determined by the similarity between the invariant distributions associated with the source and target Markov chains.
We extend the notion of a distribution exponent from Kpotufe and Martinet to kernel transfer exponents of uniformly ergodic Markov chains.
arXiv Detail & Related papers (2023-07-17T14:24:27Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Rosenthal-type inequalities for linear statistics of Markov chains [20.606986885851573]
We establish novel deviation bounds for additive functionals of geometrically ergodic Markov chains.
We pay special attention to the dependence of our bounds on the mixing time of the corresponding chain.
arXiv Detail & Related papers (2023-03-10T10:24:46Z)
- Score-Based Diffusion meets Annealed Importance Sampling [89.92133671626327]
Annealed Importance Sampling remains one of the most effective methods for marginal likelihood estimation.
We leverage recent progress in score-based generative modeling to approximate the optimal extended target distribution for AIS proposals.
arXiv Detail & Related papers (2022-08-16T12:13:29Z)
- Concentration inequality for U-statistics of order two for uniformly ergodic Markov chains [0.0]
We prove a concentration inequality for U-statistics of order two for uniformly ergodic Markov chains.
We show that we can recover the convergence rate of Arcones and Giné, who proved a concentration result for U-statistics of independent random variables and canonical kernels.
arXiv Detail & Related papers (2020-11-20T15:14:34Z)
- MCMC-Interactive Variational Inference [56.58416764959414]
We propose MCMC-interactive variational inference (MIVI) to estimate the posterior in a time-constrained manner.
MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement.
Experiments show that MIVI not only accurately approximates the posteriors but also facilitates designs of gradient MCMC and Gibbs sampling transitions.
arXiv Detail & Related papers (2020-10-02T17:43:20Z)
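Several entries above concern functionals of Markov chains; the uncertainty-quantification paper, for instance, analyzes TD learning for policy evaluation. As a hedged illustration of that setting, here is a minimal TD(0) sketch on a finite chain; the toy chain, function name, and parameters below are illustrative and not taken from any of the papers.

```python
import numpy as np

def td0_evaluate(P, r, gamma, alpha, n_steps, rng):
    """TD(0) policy evaluation along a single Markov chain trajectory.

    P : (S, S) transition matrix of the chain under a fixed policy
    r : (S,) deterministic reward received in each state
    gamma : discount factor, alpha : constant step size
    """
    S = P.shape[0]
    V = np.zeros(S)
    s = 0
    for _ in range(n_steps):
        s_next = rng.choice(S, p=P[s])
        # TD(0) update: move V[s] toward the bootstrapped target r[s] + gamma * V[s'].
        V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
        s = s_next
    return V
```

The iterates track the exact value function $V = (I - \gamma P)^{-1} r$ up to fluctuations of order $\sqrt{\alpha}$, which is precisely the kind of Markov-chain functional the concentration bounds above control.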
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.