Enabling Quantum Speedup of Markov Chains using a Multi-level Approach
- URL: http://arxiv.org/abs/2210.14088v1
- Date: Tue, 25 Oct 2022 15:17:52 GMT
- Title: Enabling Quantum Speedup of Markov Chains using a Multi-level Approach
- Authors: Xiantao Li
- Abstract summary: Quantum speedup for mixing a Markov chain can be achieved based on the construction of $r$ slowly-varying Markov chains.
We show that the density function of a low-resolution Markov chain can be used to warm start the Markov chain with high resolution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum speedup for mixing a Markov chain can be achieved based on the
construction of $r$ slowly-varying Markov chains, where the initial chain can be
easily prepared and the spectral gaps have a uniform lower bound. The overall
complexity is proportional to $r$. We present a multi-level approach to
construct such a sequence of $r$ Markov chains by varying a resolution
parameter $h$. We show that the density function of a low-resolution Markov
chain can be used to warm start the Markov chain with high resolution. We prove
that, in terms of the chain length, the new algorithm has $O(1)$ complexity
rather than $O(r)$.
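The warm-start idea in the abstract can be illustrated with a purely classical analogue: run a cheap low-resolution (large-$h$) chain first, then use its output to initialize the high-resolution chain, so the fine level needs far less burn-in. A minimal sketch, assuming a simple random-walk Metropolis chain and an illustrative 1D target density, neither of which comes from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Unnormalized 1D target density (illustrative choice, not from the paper).
    return np.exp(-x**4 + 2 * x**2)

def metropolis(x0, h, n_steps):
    """Random-walk Metropolis with proposal scale tied to a resolution parameter h."""
    x = x0
    for _ in range(n_steps):
        y = x + h * rng.normal()
        if rng.random() < min(1.0, target(y) / target(x)):
            x = y
    return x

# Multi-level warm start: a cheap coarse chain supplies the initial state
# of the fine chain, shortening the fine-level burn-in.
x_coarse = metropolis(x0=0.0, h=1.0, n_steps=2000)    # low resolution
x_fine = metropolis(x0=x_coarse, h=0.1, n_steps=500)  # high resolution, warm-started
```

In the paper the coupling between levels is through the density function of the coarse chain, and the speedup is quantum; the sketch only conveys the coarse-to-fine initialization pattern.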
Related papers
- Stochastic Quantum Sampling for Non-Logconcave Distributions and
Estimating Partition Functions [13.16814860487575]
We present quantum algorithms for sampling from nonlogconcave probability distributions.
$f$ can be written as a finite sum $f(x) := \frac{1}{N}\sum_{k=1}^{N} f_k(x)$.
arXiv Detail & Related papers (2023-10-17T17:55:32Z) - Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting [64.0722630873758]
We consider a rather general and broad class of Markov chains, Ito chains, that look like the Euler-Maruyama discretization of some stochastic differential equation.
We prove a bound in the $W_2$-distance between the laws of our Ito chain and the corresponding differential equation.
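For reference, the Euler-Maruyama scheme mentioned above discretizes an SDE $dX_t = b(X_t)\,dt + \sigma\,dW_t$ by stepping the drift forward in time and adding scaled Gaussian noise. A minimal sketch, using an Ornstein-Uhlenbeck drift as a stand-in example (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(drift, sigma, x0, dt, n_steps):
    """Simulate dX_t = drift(X_t) dt + sigma dW_t with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        # One step: deterministic drift increment plus sqrt(dt)-scaled noise.
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

# Ornstein-Uhlenbeck process dX_t = -X_t dt + 0.5 dW_t as a concrete example.
path = euler_maruyama(drift=lambda x: -x, sigma=0.5, x0=2.0, dt=0.01, n_steps=1000)
```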
arXiv Detail & Related papers (2023-10-09T18:38:56Z) - Stochastic Gradient Descent under Markovian Sampling Schemes [3.04585143845864]
We study a variation of vanilla stochastic gradient descent where the algorithm only has access to a Markovian sampling scheme.
We focus on obtaining rates of convergence under the least restrictive assumptions possible on the underlying Markov chain.
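The setting can be made concrete with a toy sketch: instead of drawing data points i.i.d., each SGD step uses a sample produced by a Markov chain over the data indices. The transition matrix and quadratic loss below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective: weighted mean of (w - a_i)^2 over data points a_i, where the
# index i evolves as a Markov chain rather than being sampled i.i.d.
data = np.array([1.0, 2.0, 3.0])

# Row-stochastic transition matrix over the three data indices (illustrative).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

w, i = 0.0, 0
for t in range(1, 5001):
    grad = 2.0 * (w - data[i])   # stochastic gradient at the current Markov sample
    w -= grad / (t + 10)         # diminishing step size
    i = rng.choice(3, p=P[i])    # next index from the Markov chain, not i.i.d.
```

Under the stationary distribution of this chain, $\pi = (1/4, 1/2, 1/4)$, the iterate drifts toward the $\pi$-weighted mean of the data; the cited paper studies when and how fast such schemes converge.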
arXiv Detail & Related papers (2023-02-28T09:18:00Z) - A rapidly mixing Markov chain from any gapped quantum many-body system [2.321323878201932]
We consider a bit string $x$ sampled from the distribution $\pi(x)=|\langle x|\psi\rangle|^2$, where $\psi$ is the unique ground state of a local Hamiltonian $H$.
Our main result describes a direct link between the inverse spectral gap of $H$ and the mixing time of an associated continuous-time Markov Chain with steady state $pi$.
arXiv Detail & Related papers (2022-07-14T16:38:42Z) - Space-efficient Quantization Method for Reversible Markov Chains [1.558664512158522]
Szegedy showed how to construct a quantum walk $W(P)$ for any reversible Markov chain $P$.
We show that it is possible to avoid this doubling of state space for certain Markov chains.
arXiv Detail & Related papers (2022-06-14T14:41:56Z) - Neural Inverse Kinematics [72.85330210991508]
Inverse kinematics (IK) methods recover the parameters of the joints, given the desired position of selected elements in the kinematic chain.
We propose a neural IK method that employs the hierarchical structure of the problem to sequentially sample valid joint angles conditioned on the desired position.
arXiv Detail & Related papers (2022-05-22T14:44:07Z) - An Information-Theoretic Approach for Automatically Determining the
Number of States when Aggregating Markov Chains [12.716429755564821]
We show that an augmented value-of-information-based approach to aggregating Markov chains facilitates the determination of the number of state groups.
The optimal state-group count coincides with the case where the complexity of the reduced-order chain is balanced against the mutual dependence between the original- and reduced-order chain dynamics.
arXiv Detail & Related papers (2021-07-05T05:36:04Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for the considered step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
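Asynchronous Q-learning updates only the $(s, a)$ entry actually visited along a single Markovian trajectory, which is why the mixing time $t_{\mathrm{mix}}$ and minimum state-action occupancy $\mu_{\min}$ enter the bound. A minimal sketch on a hypothetical two-state, two-action MDP (the environment and step-size schedule are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny deterministic MDP: P[s, a] = next state, R[s, a] = reward (illustrative).
P = np.array([[0, 1], [1, 0]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])
gamma = 0.9

Q = np.zeros((2, 2))
s = 0
for t in range(1, 20001):
    a = rng.integers(2)                 # behavior policy: uniform exploration
    s_next = P[s, a]
    alpha = 1.0 / (1.0 + 0.01 * t)      # diminishing learning rate
    # Asynchronous update: only the visited (s, a) entry changes.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

Here the optimal policy is to take action 0 in state 0 and action 1 in state 1, and the learned Q-values reflect that ordering; the cited paper quantifies how many such single-trajectory updates suffice for entrywise accuracy.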
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.