Quantum Speedups for Multiproposal MCMC
- URL: http://arxiv.org/abs/2312.01402v3
- Date: Tue, 18 Feb 2025 14:09:01 GMT
- Title: Quantum Speedups for Multiproposal MCMC
- Authors: Chin-Yi Lin, Kuo-Chin Chen, Philippe Lemey, Marc A. Suchard, Andrew J. Holbrook, Min-Hsiu Hsieh
- Abstract summary: We present a new, faster quantum multiproposal MCMC strategy, QPMCMC2.
QPMCMC2 requires only $\mathcal{O}(1)$ target evaluations and $\mathcal{O}(\log P)$ qubits when computing over a large number of proposals $P$.
We demonstrate this flexibility by applying QPMCMC2 to novel Ising-type models built on bacterial evolutionary networks.
- Score: 8.580097282861725
- Abstract: Multiproposal Markov chain Monte Carlo (MCMC) algorithms choose from multiple proposals to generate their next chain step in order to sample from challenging target distributions more efficiently. However, on classical machines, these algorithms require $\mathcal{O}(P)$ target evaluations for each Markov chain step when choosing from $P$ proposals. Recent work demonstrates the possibility of quadratic quantum speedups for one such multiproposal MCMC algorithm. After generating $P$ proposals, this quantum parallel MCMC (QPMCMC) algorithm requires only $\mathcal{O}(\sqrt{P})$ target evaluations at each step, outperforming its classical counterpart. However, generating $P$ proposals using classical computers still requires $\mathcal{O}(P)$ time complexity, resulting in the overall complexity of QPMCMC remaining $\mathcal{O}(P)$. Here, we present a new, faster quantum multiproposal MCMC strategy, QPMCMC2. With a specially designed Tjelmeland distribution that generates proposals close to the input state, QPMCMC2 requires only $\mathcal{O}(1)$ target evaluations and $\mathcal{O}(\log P)$ qubits when computing over a large number of proposals $P$. Unlike its slower predecessor, the QPMCMC2 Markov kernel (1) maintains detailed balance exactly and (2) is fully explicit for a large class of graphical models. We demonstrate this flexibility by applying QPMCMC2 to novel Ising-type models built on bacterial evolutionary networks and obtain significant speedups for Bayesian ancestral trait reconstruction for 248 observed salmonella bacteria.
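To make the classical baseline concrete, below is a minimal sketch of one Tjelmeland-style multiproposal step for a toy Ising-type model on a graph. This is an illustrative assumption, not the paper's QPMCMC2 implementation: QPMCMC2 replaces the $\mathcal{O}(P)$ classical selection step with a quantum subroutine, whereas this sketch evaluates the target at every pooled state. All function names and parameters (e.g. `flip_prob`, the ring coupling) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_log_density(state, J, h, beta=1.0):
    """Unnormalized log target of an Ising-type model on a graph.

    state : array of +/-1 spins, J : symmetric coupling matrix,
    h : external field.  All parameters are illustrative.
    """
    return beta * (0.5 * state @ J @ state + h @ state)

def multiproposal_step(state, log_target, P=32, flip_prob=0.05):
    """One classical multiproposal step with a symmetric Tjelmeland-style pool.

    Draw an auxiliary "anchor" state near the current state, draw P proposals
    around the anchor, and select the next state from the pool
    {current, proposal_1, ..., proposal_P} with probability proportional to
    the target density.  Because the spin-flip kernel is symmetric, this
    selection rule preserves detailed balance.
    """
    n = state.size

    def perturb(x):
        # Flip each spin independently with probability flip_prob (symmetric kernel).
        flips = rng.random(n) < flip_prob
        return np.where(flips, -x, x)

    anchor = perturb(state)
    pool = [state] + [perturb(anchor) for _ in range(P)]

    log_w = np.array([log_target(x) for x in pool])  # O(P) classical target evaluations
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    return pool[rng.choice(len(pool), p=probs)]

# Toy usage: 10 spins coupled on a ring, a few hundred steps.
n = 10
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
h = np.zeros(n)

x = rng.choice([-1.0, 1.0], size=n)
for _ in range(500):
    x = multiproposal_step(x, lambda s: ising_log_density(s, J, h))
```

The loop over the pooled states is exactly the per-step cost that grows linearly in $P$ on classical hardware, which is the bottleneck the quantum selection step is designed to remove.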
Related papers
- Dividing quantum circuits for time evolution of stochastic processes by orthogonal series density estimation [0.0]
Quantum Monte Carlo integration (QMCI) is a quantum algorithm to estimate expectations of random variables.
In this paper, we propose a method to divide $U_X(t)$ based on orthogonal series density estimation.
Our method achieves the circuit depth and total query complexity scaling as $O(\sqrt{N})$ and $O(N^{3/2})$, respectively.
arXiv Detail & Related papers (2024-06-04T01:50:21Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - SMC Is All You Need: Parallel Strong Scaling [0.695967916921061]
We develop a fully parallel sequential Monte Carlo (pSMC) method which provably delivers parallel strong scaling.
The pSMC converges to infinitesimal accuracy MSE $= O(\varepsilon^2)$ with a fixed finite time-complexity Cost $= O(1)$ and with no efficiency leakage.
arXiv Detail & Related papers (2024-02-09T04:13:38Z) - Generalized Quantum Signal Processing [0.6768558752130311]
We present a Generalized Quantum Signal Processing approach, employing general SU(2) rotations as our signal processing operators.
Our approach lifts all practical restrictions on the family of achievable transformations, with the sole remaining condition being that $|P| \leq 1$.
In cases where only $P$ is known, we provide an efficient GPU optimization capable of identifying, in under a minute, a corresponding $Q$ for degrees on the order of $10^7$.
arXiv Detail & Related papers (2023-08-03T01:51:52Z) - Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We provide general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Reward-Mixing MDPs with a Few Latent Contexts are Learnable [75.17357040707347]
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs).
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N \times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - A quantum parallel Markov chain Monte Carlo [0.0]
We use the Gumbel-max trick to turn the generalized accept-reject step into a discrete optimization problem.
We embed target density evaluations within a well-known extension of Grover's quantum search algorithm.
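The Gumbel-max trick referred to above can be stated classically in a few lines. The following hedged sketch (the weights and function name are illustrative, and it does not include the Grover-type search itself) shows that adding i.i.d. Gumbel noise to log-weights and taking an argmax draws an index with probability proportional to the weights, turning the generalized accept-reject step into the discrete maximization that the quantum search then accelerates.

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_max_sample(log_weights):
    """Draw index i with probability proportional to exp(log_weights[i]).

    Adding independent Gumbel(0, 1) noise to each log-weight and taking the
    argmax converts categorical sampling into a discrete optimization problem.
    """
    gumbel = -np.log(-np.log(rng.random(len(log_weights))))
    return int(np.argmax(log_weights + gumbel))

# Sanity check: empirical frequencies match the normalized weights.
log_w = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
draws = np.array([gumbel_max_sample(log_w) for _ in range(20000)])
print(np.bincount(draws) / draws.size)  # approximately [0.1, 0.2, 0.3, 0.4]
```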
arXiv Detail & Related papers (2021-12-01T01:25:06Z) - On Effective Parallelization of Monte Carlo Tree Search [51.15940034629022]
Monte Carlo Tree Search (MCTS) is computationally expensive as it requires a substantial number of rollouts to construct the search tree.
How to design effective parallel MCTS algorithms has not been systematically studied and remains poorly understood.
We demonstrate how proposed necessary conditions can be adopted to design more effective parallel MCTS algorithms.
arXiv Detail & Related papers (2020-06-15T21:36:00Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Provably Efficient Model-Free Algorithm for MDPs with Peak Constraints [38.2783003051101]
This paper considers the peak Constrained Markov Decision Process (PCMDP), where the agent chooses the policy to maximize total reward in the finite horizon as well as satisfy constraints at each epoch with probability 1.
We propose a model-free algorithm that converts the PCMDP problem into an unconstrained problem and applies a Q-learning based approach.
arXiv Detail & Related papers (2020-03-11T23:23:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.