More Quantum Speedups for Multiproposal MCMC
- URL: http://arxiv.org/abs/2312.01402v2
- Date: Tue, 5 Dec 2023 10:42:31 GMT
- Title: More Quantum Speedups for Multiproposal MCMC
- Authors: Chin-Yi Lin, Kuo-Chin Chen, Philippe Lemey, Marc A. Suchard, Andrew J.
Holbrook, Min-Hsiu Hsieh
- Abstract summary: Multiproposal Markov chain Monte Carlo (MCMC) algorithms choose from multiple proposals at each iteration in order to sample from challenging target distributions more efficiently.
Recent work demonstrates the possibility of quadratic quantum speedups for one such multiproposal MCMC algorithm.
We present a fast new quantum multiproposal MCMC strategy, QPMCMC2, that only requires $\mathcal{O}(1)$ target evaluations and $\mathcal{O}(\log P)$ qubits.
- Score: 9.09160682938666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiproposal Markov chain Monte Carlo (MCMC) algorithms choose from multiple
proposals at each iteration in order to sample from challenging target
distributions more efficiently. Recent work demonstrates the possibility of
quadratic quantum speedups for one such multiproposal MCMC algorithm. Using $P$
proposals, this quantum parallel MCMC QPMCMC algorithm requires only
$\mathcal{O}(\sqrt{P})$ target evaluations at each step. Here, we present a
fast new quantum multiproposal MCMC strategy, QPMCMC2, that only requires
$\mathcal{O}(1)$ target evaluations and $\mathcal{O}(\log P)$ qubits. Unlike
its slower predecessor, the QPMCMC2 Markov kernel (1) maintains detailed
balance exactly and (2) is fully explicit for a large class of graphical
models. We demonstrate this flexibility by applying QPMCMC2 to novel Ising-type
models built on bacterial evolutionary networks and obtain significant speedups
for Bayesian ancestral trait reconstruction for 248 observed Salmonella
bacteria.
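The multiproposal mechanism the abstract builds on can be illustrated classically. Below is a minimal sketch of a Tjelmeland-style multiproposal step with independent proposals, not the paper's quantum algorithm; the function name, the Gaussian target, and all tuning constants are illustrative assumptions.

```python
import numpy as np

def multiproposal_step(x, log_target, log_prop, sample_prop, P=8, rng=None):
    """One classical multiproposal MCMC step with P independent proposals.

    The candidate pool is {current state} plus P i.i.d. draws from a fixed
    proposal q; the next state is selected with probability proportional to
    the importance weight pi(y)/q(y).  Exchangeability of the pool leaves the
    target distribution invariant for this kernel.
    """
    rng = np.random.default_rng() if rng is None else rng
    pool = [x] + [sample_prop(rng) for _ in range(P)]
    log_w = np.array([log_target(y) - log_prop(y) for y in pool])
    w = np.exp(log_w - log_w.max())                  # stabilised weights
    return pool[rng.choice(len(pool), p=w / w.sum())]

# Usage: sample a standard normal target with an over-dispersed N(0, 2^2) proposal.
log_target = lambda y: -0.5 * y * y
log_prop = lambda y: -0.5 * (y / 2.0) ** 2
sample_prop = lambda rng: 2.0 * rng.standard_normal()
rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(5000):
    x = multiproposal_step(x, log_target, log_prop, sample_prop, rng=rng)
    samples.append(x)
samples = np.array(samples)
```

Each step evaluates the target once per pool member; the quantum schemes discussed above aim to reduce exactly this per-step evaluation cost.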
Related papers
- AutoStep: Locally adaptive involutive MCMC [51.186543293659376]
AutoStep MCMC selects an appropriate step size at each iteration adapted to the local geometry of the target distribution.
We show that AutoStep MCMC is competitive with state-of-the-art methods in terms of effective sample size per unit cost.
arXiv Detail & Related papers (2024-10-24T17:17:11Z) - Convergence Bounds for Sequential Monte Carlo on Multimodal Distributions using Soft Decomposition [6.872242798058046]
We prove bounds on the variance of a function $f$ under the empirical measure of the samples obtained by the Sequential Monte Carlo (SMC) algorithm.
We show that bounds can be obtained in the truly multi-modal setting, with mixing times that depend on local MCMC dynamics.
arXiv Detail & Related papers (2024-05-29T22:43:45Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - SMC Is All You Need: Parallel Strong Scaling [0.695967916921061]
We develop a fully parallel sequential Monte Carlo (pSMC) method which provably delivers parallel strong scaling.
The pSMC converges to infinitesimal accuracy MSE $= O(\varepsilon^2)$ with a fixed finite time-complexity Cost $= O(1)$ and with no efficiency leakage.
arXiv Detail & Related papers (2024-02-09T04:13:38Z) - Bayesian Decision Trees Inspired from Evolutionary Algorithms [64.80360020499555]
We propose a replacement of the Markov Chain Monte Carlo (MCMC) with an inherently parallel algorithm, the Sequential Monte Carlo (SMC)
Experiments show that SMC combined with Evolutionary Algorithms (EA) can produce more accurate results than MCMC while using 100 times fewer iterations.
arXiv Detail & Related papers (2023-05-30T06:17:35Z) - Improving multiple-try Metropolis with local balancing [0.0]
Multiple-try Metropolis (MTM) is a popular Markov chain Monte Carlo method.
We show both theoretically and empirically that this weight function induces pathological behaviours in high dimensions.
We propose to instead use weight functions akin to the locally-balanced proposal distributions of Zanella (2020).
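The contrast between the classical weight function and a locally-balanced one can be sketched in a standard multiple-try Metropolis step. This is a minimal illustration under assumed settings (symmetric Gaussian proposals, the $g(t) = \sqrt{t}$ balancing function), not the paper's implementation.

```python
import numpy as np

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def mtm_step(x, log_target, log_weight, k=5, scale=1.0, rng=None):
    """One multiple-try Metropolis step with a symmetric Gaussian proposal.

    `log_weight(ly, lx)` maps candidate and anchor log-densities to a log
    selection weight: `ly` recovers the classical pi(y) weight, while
    `0.5 * (ly - lx)` is the locally-balanced sqrt weight.
    """
    rng = np.random.default_rng() if rng is None else rng
    lx = log_target(x)
    ys = x + scale * rng.standard_normal(k)            # k forward candidates
    lys = np.array([log_target(y) for y in ys])
    lw = np.array([log_weight(ly, lx) for ly in lys])
    j = rng.choice(k, p=np.exp(lw - logsumexp(lw)))    # select one candidate
    y, ly = ys[j], lys[j]
    xs = np.append(y + scale * rng.standard_normal(k - 1), x)  # reference set
    lw_ref = np.array([log_weight(log_target(z), ly) for z in xs])
    # Generalised Metropolis-Hastings accept/reject on the weight sums.
    if np.log(rng.random()) < logsumexp(lw) - logsumexp(lw_ref):
        return y
    return x

# Run the chain on a standard normal target with the locally-balanced weight.
locally_balanced = lambda ly, lx: 0.5 * (ly - lx)
rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10000):
    x = mtm_step(x, lambda z: -0.5 * z * z, locally_balanced, rng=rng)
    samples.append(x)
samples = np.array(samples)
```

The sqrt weight corresponds to the symmetric choice $\lambda(x, y) = (\pi(x)\pi(y))^{-1/2}$ in the MTM framework, so the step remains exact while depending on the current state only through the ratio $\pi(y)/\pi(x)$.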
arXiv Detail & Related papers (2022-11-21T16:14:17Z) - A quantum parallel Markov chain Monte Carlo [0.0]
We use the Gumbel-max trick to turn the generalized accept-reject step into a discrete optimization problem.
We embed target density evaluations within a well-known extension of Grover's quantum search algorithm.
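The Gumbel-max trick mentioned here converts categorical sampling into an argmax, which is the discrete optimization problem a Grover-type search can then accelerate. A classical sketch (the example weights are invented for illustration):

```python
import numpy as np

def gumbel_max_sample(log_weights, rng):
    """Sample index i with probability proportional to exp(log_weights[i])
    by maximising log_weights + i.i.d. Gumbel(0, 1) noise."""
    return int(np.argmax(log_weights + rng.gumbel(size=len(log_weights))))

# Empirical check against the intended categorical probabilities.
rng = np.random.default_rng(1)
log_w = np.log(np.array([0.1, 0.2, 0.7]))
draws = [gumbel_max_sample(log_w, rng) for _ in range(20000)]
freq = np.bincount(draws, minlength=3) / 20000.0
```

Because only the argmax is needed, the per-step work reduces to one maximisation over the proposal pool rather than a normalised sampling pass.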
arXiv Detail & Related papers (2021-12-01T01:25:06Z) - Variational Combinatorial Sequential Monte Carlo Methods for Bayesian
Phylogenetic Inference [4.339931151475307]
We introduce Variational Combinatorial Sequential Monte Carlo (VCSMC), a powerful framework that establishes variational sequential search to learn distributions over intricate combinatorial structures.
We show that VCSMC and CSMC are efficient and explore higher probability spaces than existing methods on a range of tasks.
arXiv Detail & Related papers (2021-05-31T19:44:24Z) - Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep
Multi-Agent Reinforcement Learning [66.94149388181343]
We present a new version of a popular $Q$-learning algorithm for MARL.
We show that it can recover the optimal policy even with access to $Q*$.
We also demonstrate improved performance on predator-prey and challenging multi-agent StarCraft benchmark tasks.
arXiv Detail & Related papers (2020-06-18T18:34:50Z) - On Effective Parallelization of Monte Carlo Tree Search [51.15940034629022]
Monte Carlo Tree Search (MCTS) is computationally expensive as it requires a substantial number of rollouts to construct the search tree.
How to design effective parallel MCTS algorithms has not been systematically studied and remains poorly understood.
We demonstrate how proposed necessary conditions can be adopted to design more effective parallel MCTS algorithms.
arXiv Detail & Related papers (2020-06-15T21:36:00Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
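The asynchronous update this bound analyses touches only one state-action pair per sample along a behaviour trajectory. A minimal tabular sketch on an invented two-state MDP (all constants are illustrative, not from the paper):

```python
import numpy as np

# Tiny deterministic MDP: 2 states, 2 actions; taking action a moves to state a
# and pays reward 1 if a == 1 else 0.  With gamma = 0.9 the optimal values are
# Q*(s, 1) = 10 and Q*(s, 0) = 9 for both states.
gamma, alpha = 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((2, 2))
s = 0
for _ in range(20000):
    a = rng.integers(2)                  # uniform behaviour policy
    r, s_next = float(a == 1), a         # deterministic reward and transition
    # Asynchronous Q-learning: update only the visited (s, a) entry.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

The quantities in the bound have direct counterparts here: $\mu_{\min}$ is the minimum visitation frequency of a state-action pair under the behaviour policy, and $t_{\mathrm{mix}}$ is the mixing time of the induced trajectory.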
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.