Quantum Speedups for Markov Chain Monte Carlo Methods with Application to Optimization
- URL: http://arxiv.org/abs/2504.03626v1
- Date: Fri, 04 Apr 2025 17:44:22 GMT
- Title: Quantum Speedups for Markov Chain Monte Carlo Methods with Application to Optimization
- Authors: Guneykan Ozgul, Xiantao Li, Mehrdad Mahdavi, Chunhao Wang
- Abstract summary: We propose quantum algorithms that provide provable speedups for Markov Chain Monte Carlo methods. By introducing novel techniques for gradient estimation, our algorithms improve the complexities of classical samplers.
- Score: 12.054017903540194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose quantum algorithms that provide provable speedups for Markov Chain Monte Carlo (MCMC) methods commonly used for sampling from probability distributions of the form $\pi \propto e^{-f}$, where $f$ is a potential function. Our first approach considers Gibbs sampling for finite-sum potentials in the stochastic setting, employing an oracle that provides gradients of individual functions. In the second setting, we consider access only to a stochastic evaluation oracle, allowing simultaneous queries at two points of the potential function under the same stochastic parameter. By introducing novel techniques for stochastic gradient estimation, our algorithms improve the gradient and evaluation complexities of classical samplers, such as Hamiltonian Monte Carlo (HMC) and Langevin Monte Carlo (LMC) in terms of dimension, precision, and other problem-dependent parameters. Furthermore, we achieve quantum speedups in optimization, particularly for minimizing non-smooth and approximately convex functions that commonly appear in empirical risk minimization problems.
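As a concrete point of reference for the classical baselines being accelerated, here is a minimal sketch of plain Langevin Monte Carlo with a mini-batch stochastic gradient oracle for a finite-sum potential. It is an illustration only, not the paper's quantum algorithm; the toy potential, step size, and batch size are assumptions chosen for readability.

```python
import numpy as np

def lmc_finite_sum(grad_components, d, n_components, eta=1e-3,
                   n_steps=5000, batch_size=8, rng=None):
    """Langevin Monte Carlo targeting pi ~ exp(-f) with f(x) = sum_i f_i(x),
    using a mini-batch stochastic gradient oracle (classical baseline)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(d)
    samples = []
    for _ in range(n_steps):
        idx = rng.choice(n_components, size=batch_size, replace=False)
        # Unbiased estimate of grad f(x) from a mini-batch of component gradients.
        g = (n_components / batch_size) * sum(grad_components(i, x) for i in idx)
        # Euler-Maruyama discretization of the overdamped Langevin diffusion.
        x = x - eta * g + np.sqrt(2 * eta) * rng.standard_normal(d)
        samples.append(x.copy())
    return np.array(samples)

# Toy example: f_i(x) = 0.5 * ||x - mu_i||^2, so pi is a Gaussian centered
# at the mean of the mu_i.
d, n = 2, 100
mus = np.random.default_rng(0).standard_normal((n, d))
samples = lmc_finite_sum(lambda i, x: x - mus[i], d, n)
print(samples[-1000:].mean(axis=0), mus.mean(axis=0))
```

Each iteration queries only a mini-batch of component gradients, which is exactly the kind of oracle cost the proposed quantum algorithms aim to reduce.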
Related papers
- Neural Network Approach to Stochastic Dynamics for Smooth Multimodal Density Estimation [0.0]
We extend the Metropolis-Adjusted Langevin Diffusion algorithm by modelling the preconditioning matrix as a random matrix.
The proposed method provides a fully adaptive mechanism that tunes proposal densities to exploit and adapt to the geometry of the local structure of the statistical model.
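For context, the sketch below shows one standard Metropolis-adjusted Langevin (MALA) step with a fixed preconditioning matrix M, assuming access to log pi and its gradient; the paper's randomized, adaptive preconditioning is not reproduced here.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, M, eta, rng):
    """One Metropolis-adjusted Langevin step with a fixed preconditioner M."""
    mean_fwd = x + eta * M @ grad_log_pi(x)
    L = np.linalg.cholesky(2 * eta * M)
    y = mean_fwd + L @ rng.standard_normal(x.shape)   # proposal ~ N(mean_fwd, 2*eta*M)
    mean_bwd = y + eta * M @ grad_log_pi(y)
    Sinv = np.linalg.inv(2 * eta * M)

    def log_q(a, mean):  # log proposal density q(a | mean), up to a common constant
        r = a - mean
        return -0.5 * r @ Sinv @ r

    log_alpha = (log_pi(y) + log_q(x, mean_bwd)) - (log_pi(x) + log_q(y, mean_fwd))
    return y if np.log(rng.uniform()) < log_alpha else x
```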
arXiv Detail & Related papers (2025-03-22T16:17:12Z) - Non-linear Quantum Monte Carlo [1.237454174824584]
Quantum computing provides a quadratic speedup over classical Monte Carlo methods for mean estimation. We propose a quantum-inside-quantum Monte Carlo algorithm that achieves such a speedup for a broad class of non-linear estimation problems.
arXiv Detail & Related papers (2025-02-07T17:13:27Z) - Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
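To illustrate the evaluation-only access model, here is a minimal sketch of a standard two-point zeroth-order gradient estimator built from noisy function values; the probe count and smoothing radius are illustrative assumptions, and this is not the paper's exact estimator or analysis.

```python
import numpy as np

def two_point_grad_estimate(f_noisy, x, radius=1e-3, n_probes=16, rng=None):
    """Estimate grad f(x) using only noisy evaluations, by averaging finite
    differences along random unit directions (a standard two-point scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n_probes):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                       # uniform direction on the sphere
        g += (f_noisy(x + radius * u) - f_noisy(x - radius * u)) / (2 * radius) * u
    return (d / n_probes) * g                        # unbiased for the smoothed gradient
```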
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Quantum speedups for stochastic optimization [18.32349609443295]
We consider the problem of minimizing a continuous function given quantum access to a stochastic gradient oracle.
We provide two new methods for the special case of minimizing a Lipschitz convex function.
arXiv Detail & Related papers (2023-08-03T07:39:10Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
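A minimal sketch of the probabilistic representation such solvers build on: for the heat equation u_t = 0.5 * u_xx, the solution is u(t, x) = E[u0(x + W_t)], which a plain Monte Carlo average over Brownian increments can estimate. The neural training loop of the paper is not reproduced; the example equation and sample size are assumptions.

```python
import numpy as np

def heat_solution_mc(u0, x, t, n_particles=100_000, rng=None):
    """Monte Carlo estimate of u(t, x) for u_t = 0.5 * u_xx with u(0, .) = u0,
    using the probabilistic representation u(t, x) = E[u0(x + W_t)]."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(n_particles) * np.sqrt(t)   # Brownian increment W_t ~ N(0, t)
    return u0(x + w).mean()

# Example: u0(x) = sin(x) gives the exact solution exp(-t/2) * sin(x).
x, t = 0.7, 0.5
print(heat_solution_mc(np.sin, x, t), np.exp(-t / 2) * np.sin(x))
```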
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
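As a rough illustration of the one-sample idea, the sketch below keeps a per-component gradient table and refreshes a single randomly chosen entry per iteration, feeding the running average to a Frank-Wolfe step. The warm-started table, step-size rule, and the simplex oracle mentioned afterwards are simplifications, not the paper's exact method.

```python
import numpy as np

def one_sample_cgm(grad_components, lmo, x0, n_components, n_iters=2000, rng=None):
    """Conditional gradient method keeping a table of per-component gradients
    (SAG-style), refreshing one randomly chosen entry per iteration."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    table = np.array([grad_components(i, x) for i in range(n_components)])  # warm start
    avg = table.mean(axis=0)
    for t in range(1, n_iters + 1):
        i = rng.integers(n_components)
        g_new = grad_components(i, x)
        avg += (g_new - table[i]) / n_components   # update the running average gradient
        table[i] = g_new
        s = lmo(avg)                               # linear minimization oracle over the feasible set
        x += (2.0 / (t + 2)) * (s - x)             # standard Frank-Wolfe step size
    return x
```

For the probability simplex, the linear minimization oracle can be as simple as `lambda g: np.eye(len(g))[np.argmin(g)]`.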
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Quantum algorithm for stochastic optimal stopping problems with
applications in finance [60.54699116238087]
The famous least squares Monte Carlo (LSM) algorithm combines linear least squares regression with Monte Carlo simulation to approximately solve problems in optimal stopping theory.
We propose a quantum LSM based on quantum access to a stochastic process, on quantum circuits for computing the optimal stopping times, and on quantum techniques for Monte Carlo.
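For orientation, here is the classical Longstaff-Schwartz LSM baseline on a toy Bermudan put under geometric Brownian motion, using a cubic polynomial regression for the continuation value; the quantum subroutines of the paper are not represented, and all market parameters are illustrative.

```python
import numpy as np

def lsm_bermudan_put(s0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=50_000, rng=None):
    """Classical least squares Monte Carlo (Longstaff-Schwartz) for a Bermudan put:
    regress continuation values on polynomial features of the asset price."""
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / n_steps
    # Simulate geometric Brownian motion paths.
    z = rng.standard_normal((n_paths, n_steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - s[:, -1], 0.0)          # cash flow if held to maturity
    for t in range(n_steps - 2, -1, -1):
        itm = K - s[:, t] > 0                       # regress only on in-the-money paths
        X = np.vander(s[itm, t], 4)                 # cubic polynomial basis
        coef = np.linalg.lstsq(X, payoff[itm] * np.exp(-r * dt), rcond=None)[0]
        cont = X @ coef                             # estimated continuation value at time t
        exercise = (K - s[itm, t]) > cont
        payoff[itm] = np.where(exercise, K - s[itm, t], payoff[itm] * np.exp(-r * dt))
        payoff[~itm] *= np.exp(-r * dt)
    return np.exp(-r * dt) * payoff.mean()

print(lsm_bermudan_put())   # close to the Bermudan/American put value for these parameters
```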
arXiv Detail & Related papers (2021-11-30T12:21:41Z) - A Proximal Algorithm for Sampling from Non-smooth Potentials [10.980294435643398]
We propose a novel MCMC algorithm for sampling from non-smooth potentials.
Our method is based on the proximal bundle method and an alternating sampling framework.
One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential.
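A minimal sketch of what a restricted Gaussian oracle does, realized here by simple rejection sampling in one dimension for a non-smooth potential g with a known lower bound; the paper's proximal-bundle construction handles general convex non-smooth potentials far more efficiently.

```python
import numpy as np

def restricted_gaussian_oracle_1d(g, g_min, y, eta, rng=None, max_tries=100_000):
    """Sample from pi_y(x) proportional to exp(-g(x) - (x - y)^2 / (2 * eta)) by
    rejection: propose from N(y, eta) and accept with probability exp(g_min - g(x)),
    where g_min is a known lower bound of g (so the acceptance probability is <= 1)."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(max_tries):
        x = y + np.sqrt(eta) * rng.standard_normal()
        if rng.uniform() < np.exp(g_min - g(x)):
            return x
    raise RuntimeError("rejection sampler did not accept")

# Example: non-smooth potential g(x) = |x|, lower-bounded by 0.
print(restricted_gaussian_oracle_1d(abs, 0.0, y=0.5, eta=0.1))
```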
arXiv Detail & Related papers (2021-10-09T15:26:07Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
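For reference, the sketch below is plain SGD with per-step gradient norm clipping on a toy problem with heavy-tailed (Student-t) gradient noise; the specific stepsize rules and high-probability analysis of the paper are not reproduced, and the clip level and step size are assumptions.

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, step=0.01, clip_level=1.0, n_iters=1000, rng=None):
    """SGD where each stochastic gradient is clipped to a fixed norm before the step,
    a standard way to control heavy-tailed gradient noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_oracle(x, rng)
        norm = np.linalg.norm(g)
        if norm > clip_level:
            g = g * (clip_level / norm)    # shrink the gradient, keep its direction
        x -= step * g
    return x

# Toy problem: minimize ||x||^2 / 2 with heavy-tailed (Student-t) gradient noise.
oracle = lambda x, rng: x + rng.standard_t(df=2, size=x.shape)
print(clipped_sgd(oracle, np.full(5, 10.0)))
```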
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasingly large momentum parameter for the first-order moment is sufficient for adaptive scaling. We also give insights into increasing the momentum in a stagewise manner, in accordance with a stagewise decreasing step size.
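A schematic of the idea, as an Adam-style update whose first-moment momentum parameter grows over iterations; the particular schedule below is an illustrative assumption, not the schedule analyzed in the paper.

```python
import numpy as np

def adam_increasing_momentum(grad_oracle, x0, step=0.01, beta2=0.999,
                             n_iters=2000, eps=1e-8, rng=None):
    """Adam-style updates where the first-moment momentum parameter grows over
    iterations (illustrative schedule), instead of being held fixed."""
    rng = np.random.default_rng() if rng is None else rng
    x, m, v = x0.copy(), np.zeros_like(x0), np.zeros_like(x0)
    for t in range(1, n_iters + 1):
        beta1 = 1.0 - 1.0 / np.sqrt(t + 1)       # momentum parameter increases toward 1
        g = grad_oracle(x, rng)
        m = beta1 * m + (1 - beta1) * g           # moving-average gradient estimate
        v = beta2 * v + (1 - beta2) * g * g       # moving average of squared gradients
        x -= step * m / (np.sqrt(v) + eps)        # adaptive scaling of the step
    return x
```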
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Stochastic Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
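As a reminder of the building block, the sketch below is one mirror descent step on the probability simplex with the negative-entropy mirror map (the exponentiated-gradient update); the parallelization and MDP-specific machinery of the paper are not included.

```python
import numpy as np

def entropy_mirror_descent_step(p, grad, step):
    """One mirror descent step on the probability simplex with the negative-entropy
    mirror map; this is the multiplicative-weights / exponentiated-gradient update."""
    w = p * np.exp(-step * grad)
    return w / w.sum()

# Example: drive a distribution toward the coordinate with the smallest cost.
p = np.full(4, 0.25)
costs = np.array([3.0, 1.0, 2.0, 4.0])
for _ in range(50):
    p = entropy_mirror_descent_step(p, costs, step=0.5)
print(p)   # mass concentrates on index 1
```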
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Marginalised Gaussian Processes with Nested Sampling [10.495114898741203]
Gaussian Process (GP) models define a rich distribution over functions with inductive biases controlled by a kernel function.
This work presents an alternative learning procedure where the hyperparameters of the kernel function are marginalised using Nested Sampling (NS).
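For concreteness, the quantity whose hyperparameter posterior a sampler such as NS explores is the GP log marginal likelihood; a minimal RBF-kernel version is sketched below, with the kernel choice and parameterization as assumptions.

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, lengthscale, signal_var, noise_var):
    """Log marginal likelihood of a GP with an RBF kernel; this is the quantity
    whose hyperparameter posterior a sampler such as NS explores."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    K = signal_var * np.exp(-0.5 * d2 / lengthscale**2) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # K^{-1} y via Cholesky
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))
```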
arXiv Detail & Related papers (2020-10-30T16:04:35Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
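For reference, the Moreau envelope behind this convergence measure is, in standard notation, $f_\lambda(x) = \min_y \{\, f(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \,\}$, and the stated rate bounds $\|\nabla f_\lambda(x_t)\|$ by $\tilde{\mathcal{O}}(t^{-1/4})$.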
arXiv Detail & Related papers (2020-06-11T17:43:19Z)