Monte Carlo and quasi-Monte Carlo integration for likelihood functions
- URL: http://arxiv.org/abs/2506.21733v1
- Date: Thu, 26 Jun 2025 19:39:30 GMT
- Title: Monte Carlo and quasi-Monte Carlo integration for likelihood functions
- Authors: Yanbo Tang
- Abstract summary: We compare the integration error of Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods for approximating the normalizing constant of posterior distributions and certain marginal likelihoods. We find that if the dimension of the integral remains fixed as $n$ and $m$ tend to infinity, the scaling rate of the relative error of MC integration includes an additional $n^{1/2}\log(n)^{p/2}$ data-dependent factor. An additional contribution of this work is a bound on the high-dimensional scaling of the star discrepancy for the Halton sequence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We compare the integration error of Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods for approximating the normalizing constant of posterior distributions and certain marginal likelihoods. In doing so, we characterize the dependency of the relative and absolute integration errors on the number of data points ($n$), the number of grid points ($m$) and the dimension of the integral ($p$). We find that if the dimension of the integral remains fixed as $n$ and $m$ tend to infinity, the scaling rate of the relative error of MC integration includes an additional $n^{1/2}\log(n)^{p/2}$ data-dependent factor, while for QMC this factor is $\log(n)^{p/2}$. In this scenario, QMC will outperform MC if $\log(m)^{p - 1/2}/\sqrt{mn\log(n)} < 1$, which differs from the usual result that QMC will outperform MC if $\log(m)^p/m^{1/2} < 1$. The accuracies of MC and QMC methods are also examined in the high-dimensional setting as $p \rightarrow \infty$, where MC gives more optimistic results as the scaling in dimension is slower than that of QMC when the Halton sequence is used to construct the low discrepancy grid; however, both methods display poor dimensional scaling as expected. An additional contribution of this work is a bound on the high-dimensional scaling of the star discrepancy for the Halton sequence.
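Below is a minimal illustrative sketch (not the paper's code) of the comparison described in the abstract: both estimators approximate the normalizing constant $Z = \int f(\theta)\,d\theta$ of an unnormalized posterior by averaging the integrand over $m$ grid points, and they differ only in whether the grid is an i.i.d. uniform sample (MC) or a low-discrepancy Halton sequence (QMC). It assumes SciPy's `scipy.stats.qmc.Halton` generator; the Gaussian toy integrand, the synthetic data, and the values of $n$, $m$, and $p$ are assumptions chosen only for illustration.

```python
# Minimal sketch (not the paper's code): MC vs. QMC (Halton) estimates of the
# normalizing constant Z = \int_{[0,1]^p} f(theta) dtheta of a toy unnormalized
# posterior. Integrand, data, and (n, m, p) are illustrative assumptions.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
n, m, p = 200, 4096, 3  # data points, grid points, dimension (illustrative)

# Synthetic data: n observations in p dimensions (assumption, for illustration).
y = rng.normal(loc=0.5, scale=0.2, size=(n, p))

def log_integrand(theta):
    """Unnormalized log-posterior: Gaussian likelihood, flat prior on [0,1]^p.

    theta: (m, p) array of grid points; returns an (m,) array of log values.
    """
    resid = y[None, :, :] - theta[:, None, :]            # shape (m, n, p)
    return -0.5 * np.sum(resid ** 2, axis=(1, 2)) / 0.2 ** 2

def estimate_Z(points):
    """Cube quadrature Z_hat = (1/m) * sum_i f(theta_i), computed on the log scale."""
    logf = log_integrand(points)
    c = logf.max()                                       # shift for numerical stability
    return np.exp(c) * np.mean(np.exp(logf - c))

# Plain MC grid: i.i.d. uniform points on [0,1]^p.
mc_points = rng.random((m, p))
# QMC grid: (scrambled) Halton low-discrepancy sequence on [0,1]^p.
qmc_points = qmc.Halton(d=p, scramble=True, seed=0).random(m)

print("MC  estimate of Z:", estimate_Z(mc_points))
print("QMC estimate of Z:", estimate_Z(qmc_points))
```

Per the abstract, the QMC grid is expected to help when $\log(m)^{p-1/2}/\sqrt{mn\log(n)} < 1$; repeating a sketch like this over many seeds and comparing relative errors against a high-accuracy reference value is one way to observe the different scaling in $m$ and $n$.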
Related papers
- Randomized Quasi-Monte Carlo Features for Kernel Approximation [3.105656247358225]
We investigate the application of randomized quasi-Monte Carlo (RQMC) methods in random feature approximations for kernel-based learning. Compared to the classical Monte Carlo (MC) approach, RQMC improves the deterministic approximation error bound. We show that RQMC methods maintain stable performance in both low and moderately high-dimensional settings.
arXiv Detail & Related papers (2025-03-08T03:38:28Z) - Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of the relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known. Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
arXiv Detail & Related papers (2024-06-25T06:07:20Z) - Efficient Sampling on Riemannian Manifolds via Langevin MCMC [51.825900634131486]
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{h}\,d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC.
Our results apply to general settings where $\pi^*$ can be non-exponential and $M$ can have negative Ricci curvature.
arXiv Detail & Related papers (2024-02-15T22:59:14Z) - SMC Is All You Need: Parallel Strong Scaling [0.695967916921061]
We develop a fully parallel sequential Monte Carlo (pSMC) method which provably delivers parallel strong scaling.
The pSMC converges to infinitesimal accuracy MSE $= O(\varepsilon^2)$ with a fixed finite time-complexity Cost $= O(1)$ and with no efficiency leakage.
arXiv Detail & Related papers (2024-02-09T04:13:38Z) - Convergence Analysis of Stochastic Gradient Descent with MCMC Estimators [8.493584276672971]
The convergence analysis of stochastic gradient descent (SGD) and its variants is essential for machine learning.
In this paper, we consider the SGD algorithm that employs a Markov chain Monte Carlo (MCMC) estimator to compute the gradient.
It is shown that MCMC-SGD escapes from saddle points and reaches $(\epsilon, \epsilon^{1/4})$-approximate second-order stationary points.
arXiv Detail & Related papers (2023-03-19T08:29:49Z) - Polyak-Ruppert Averaged Q-Leaning is Statistically Efficient [90.14768299744792]
We study synchronous Q-learning with Polyak-Ruppert averaging (a.k.a. averaged Q-learning) in a $\gamma$-discounted MDP.
We establish asymptotic normality for the iteration-averaged $\bar{\boldsymbol{Q}}_T$.
In short, our theoretical analysis shows that averaged Q-learning is statistically efficient.
arXiv Detail & Related papers (2021-12-29T14:47:56Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence
of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - Mixing Time Guarantees for Unadjusted Hamiltonian Monte Carlo [1.14219428942199]
We provide quantitative upper bounds on the total variation mixing time of the Markov chain corresponding to the unadjusted Hamiltonian Monte Carlo (uHMC) algorithm.
For two general classes of models and fixed time discretization step size $h$, the mixing time is shown to depend only logarithmically on the dimension.
We show that an $\varepsilon$-accurate approximation of the target distribution $\mu$ in total variation distance can be achieved by uHMC.
arXiv Detail & Related papers (2021-05-03T14:13:47Z) - A New Framework for Variance-Reduced Hamiltonian Monte Carlo [88.84622104944503]
We propose a new framework of variance-reduced Hamiltonian Monte Carlo (HMC) methods for sampling from an $L$-smooth and $m$-strongly log-concave distribution.
We show that HMC methods based on unbiased gradient estimators, including SAGA and SVRG, achieve the highest gradient efficiency with small batch sizes.
Experimental results on both synthetic and real-world benchmark data show that our new framework significantly outperforms the full gradient and stochastic gradient HMC approaches.
arXiv Detail & Related papers (2021-02-09T02:44:24Z) - Random Coordinate Langevin Monte Carlo [20.423417125810868]
Langevin Monte Carlo (LMC) is a popular Markov chain Monte Carlo sampling method.
We propose a new sampling method: Random Coordinate LMC.
arXiv Detail & Related papers (2020-10-03T18:18:11Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)