A rapidly mixing Markov chain from any gapped quantum many-body system
- URL: http://arxiv.org/abs/2207.07044v2
- Date: Mon, 30 Oct 2023 20:36:59 GMT
- Title: A rapidly mixing Markov chain from any gapped quantum many-body system
- Authors: Sergey Bravyi, Giuseppe Carleo, David Gosset, Yinchen Liu
- Abstract summary: We consider sampling a bit string $x$ from a distribution $\pi(x)=|\langle x|\psi\rangle|^2$, where $\psi$ is the unique ground state of a local Hamiltonian $H$.
Our main result describes a direct link between the inverse spectral gap of $H$ and the mixing time of an associated continuous-time Markov Chain with steady state $pi$.
- Score: 2.321323878201932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the computational task of sampling a bit string $x$ from a
distribution $\pi(x)=|\langle x|\psi\rangle|^2$, where $\psi$ is the unique
ground state of a local Hamiltonian $H$. Our main result describes a direct
link between the inverse spectral gap of $H$ and the mixing time of an
associated continuous-time Markov Chain with steady state $\pi$. The Markov
Chain can be implemented efficiently whenever ratios of ground state amplitudes
$\langle y|\psi\rangle/\langle x|\psi\rangle$ are efficiently computable, the
spectral gap of $H$ is at least inverse polynomial in the system size, and the
starting state of the chain satisfies a mild technical condition that can be
efficiently checked. This extends a previously known relationship between
sign-problem free Hamiltonians and Markov chains. The tool which enables this
generalization is the so-called fixed-node Hamiltonian construction, previously
used in Quantum Monte Carlo simulations to address the fermionic sign problem.
We implement the proposed sampling algorithm numerically and use it to sample
from the ground state of the Haldane-Shastry Hamiltonian with up to 56 qubits. We
observe empirically that our Markov chain based on the fixed-node Hamiltonian
mixes more rapidly than the standard Metropolis-Hastings Markov chain.
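As a minimal illustration of the sampling task in the abstract, the sketch below runs a standard Metropolis-Hastings chain (the baseline the paper compares against, not the fixed-node chain itself) whose acceptance rule needs only amplitude ratios $\langle y|\psi\rangle/\langle x|\psi\rangle$; the toy product state here is a hypothetical stand-in for any ground state with efficiently computable amplitudes.

```python
import random

# Toy ground state: a product state where each qubit has amplitude
# a[i] for |0> and b[i] for |1>, so <x|psi> factorizes. This stands
# in for any state whose amplitude ratios are efficiently computable.
n = 8
a = [0.8] * n  # hypothetical single-qubit amplitudes for |0>
b = [0.6] * n  # and for |1>  (0.8^2 + 0.6^2 = 1)

def amplitude(x):
    """<x|psi> for the toy product state."""
    amp = 1.0
    for i, bit in enumerate(x):
        amp *= b[i] if bit else a[i]
    return amp

def metropolis_step(x, rng):
    """One Metropolis-Hastings step with single-bit-flip proposals.
    Acceptance uses only the amplitude ratio <y|psi>/<x|psi>."""
    i = rng.randrange(len(x))
    y = list(x)
    y[i] ^= 1
    ratio = amplitude(y) / amplitude(x)
    # pi(y)/pi(x) = ratio**2, so no normalization of pi is needed
    if rng.random() < min(1.0, ratio * ratio):
        return y
    return x

rng = random.Random(0)
x = [0] * n
for _ in range(10_000):
    x = metropolis_step(x, rng)
# After mixing, x is approximately distributed as pi(x) = |<x|psi>|^2.
```

Because the acceptance probability depends only on the amplitude ratio, the chain never needs the normalization of $\pi$, which is what makes such samplers practical whenever the ratios are computable.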
Related papers
- Slow Mixing of Quantum Gibbs Samplers [47.373245682678515]
We present a quantum generalization of these tools through a generic bottleneck lemma.
This lemma focuses on quantum measures of distance, analogous to the classical Hamming distance but rooted in uniquely quantum principles.
Even with sublinear barriers, we use Feynman-Kac techniques to lift classical bottlenecks to quantum ones, establishing a tight lower bound $T_{\mathrm{mix}} = 2^{\Omega(n^\alpha)}$.
arXiv Detail & Related papers (2024-11-06T22:51:27Z) - Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling [28.931489333515618]
We establish an oracle complexity of $\widetilde{O}\left(\frac{d\beta^2\mathcal{A}^2}{\varepsilon^6}\right)$ for the simple annealed Langevin Monte Carlo algorithm.
We show that $\mathcal{A}$ represents the action of a curve of probability measures interpolating the target distribution $\pi$ and a readily sampleable distribution.
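To make the annealing idea concrete, here is a minimal sketch (not the analyzed algorithm itself) that interpolates between an easy Gaussian and a hypothetical non-log-concave double-well target $\pi \propto e^{-(x^2-1)^2}$, running unadjusted Langevin steps at each level of the interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U_target(x):
    # Non-log-concave double-well potential U(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

def grad_U_easy(x):
    # Easy reference distribution: standard Gaussian, U(x) = x^2 / 2
    return x

def annealed_langevin(n_levels=30, steps_per_level=100, h=1e-2):
    """Unadjusted Langevin steps along a linear interpolation of the
    log-densities, annealing from the Gaussian to the double well."""
    x = rng.normal()
    for k in range(n_levels):
        lam = (k + 1) / n_levels  # interpolation parameter in (0, 1]
        for _ in range(steps_per_level):
            g = (1.0 - lam) * grad_U_easy(x) + lam * grad_U_target(x)
            x = x - h * g + np.sqrt(2.0 * h) * rng.normal()
    return x

samples = np.array([annealed_langevin() for _ in range(200)])
# The target's modes sit near x = -1 and x = +1; annealing lets the
# chain reach both, which direct Langevin from one mode may not.
```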
arXiv Detail & Related papers (2024-07-24T02:15:48Z) - Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting [64.0722630873758]
We consider a rather general and broad class of Markov chains, Ito chains, that look like the Euler-Maruyama discretization of some stochastic differential equation.
We prove a bound in $W_2$-distance between the laws of our Ito chain and the corresponding differential equation.
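For reference, a minimal Euler-Maruyama discretization, assuming the simple Ornstein-Uhlenbeck SDE $dX_t = -X_t\,dt + \sigma\,dW_t$ as a hypothetical stand-in for the diffusions discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # diffusion coefficient
h = 1e-2     # step size; controls the discretization error

def euler_maruyama(x0, n_steps):
    """Euler-Maruyama chain for dX = -X dt + sigma dW (an
    Ornstein-Uhlenbeck process). x0 may be a vector of initial
    conditions, giving independent paths."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x + h * (-x) + sigma * np.sqrt(h) * rng.normal(size=x.shape)
    return x

# After many steps the chain's law is close (up to a step-size-
# dependent bias) to the OU stationary distribution N(0, sigma^2/2).
xs = euler_maruyama(np.zeros(500), 2000)
```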
arXiv Detail & Related papers (2023-10-09T18:38:56Z) - Enabling Quantum Speedup of Markov Chains using a Multi-level Approach [0.0]
Quantum speedup for mixing a Markov chain can be achieved based on the construction of a sequence of $r$ slowly-varying Markov chains.
We show that the density function of a low-resolution Markov chain can be used to warm start the Markov chain with high resolution.
arXiv Detail & Related papers (2022-10-25T15:17:52Z) - Space-efficient Quantization Method for Reversible Markov Chains [1.558664512158522]
Szegedy showed how to construct a quantum walk $W(P)$ for any reversible Markov chain $P$; the standard construction doubles the size of the state space.
We show that it is possible to avoid this doubling of the state space for certain Markov chains.
arXiv Detail & Related papers (2022-06-14T14:41:56Z) - How to simulate quantum measurement without computing marginals [3.222802562733787]
We describe and analyze algorithms for classically simulating measurement of an $n$-qubit quantum state $\psi$ in the standard basis.
Our algorithms reduce the sampling task to computing $\mathrm{poly}(n)$ amplitudes of $n$-qubit states.
arXiv Detail & Related papers (2021-12-15T21:44:05Z) - Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_{\text{noisy}}$ of a generic noisy circuit instance and the ideal output distribution shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_{\text{unif}}$ at precisely the same rate.
arXiv Detail & Related papers (2021-11-29T19:26:28Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Stoquasticity in circuit QED [78.980148137396]
We show that scalable sign-problem free path integral Monte Carlo simulations can typically be performed for such systems.
We corroborate the recent finding that an effective, non-stoquastic qubit Hamiltonian can emerge in a system of capacitively coupled flux qubits.
arXiv Detail & Related papers (2020-11-02T16:41:28Z) - A Matrix Chernoff Bound for Markov Chains and Its Application to Co-occurrence Matrices [30.49132891333353]
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a regular (aperiodic and irreducible) finite Markov chain.
Our proof is based on the matrix expander (regular undirected graph) Chernoff bound [Garg et al. STOC '18] and scalar Chernoff-Hoeffding bounds for Markov chains.
Our matrix Chernoff bound for Markov chains can be applied to analyze the behavior of co-occurrence statistics for sequential data.
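As a hypothetical instance of the statistic such a bound controls, the sketch below accumulates empirical co-occurrence counts along a single trajectory of a small aperiodic, irreducible chain; the matrix Chernoff bound governs how fast this matrix-valued average concentrates around its expectation $\mathrm{diag}(\mu)P$, with $\mu$ the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small aperiodic, irreducible chain (hypothetical example).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

def cooccurrence(T):
    """Average of the matrix-valued terms e_{x_t} e_{x_{t+1}}^T
    along a single trajectory of length T."""
    C = np.zeros((3, 3))
    x = 0
    for _ in range(T):
        y = rng.choice(3, p=P[x])
        C[x, y] += 1.0
        x = y
    return C / T

C = cooccurrence(200_000)
# C concentrates around diag(mu) @ P as T grows; the matrix Chernoff
# bound quantifies the deviation at finite T.
```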
arXiv Detail & Related papers (2020-08-06T05:44:54Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
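A minimal asynchronous Q-learning sketch on a hypothetical two-state, two-action MDP: updates come from a single behavior trajectory and only the visited $(s,a)$ entry changes at each step, which is the setting the sample-complexity bound above addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.5  # discount factor

# Hypothetical 2-state, 2-action MDP: P[s, a] is the next-state
# distribution and R[s, a] the (deterministic) reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])

Q = np.zeros((2, 2))
s = 0
for t in range(1, 200_001):
    a = int(rng.integers(2))             # uniform behavior policy
    s_next = int(rng.choice(2, p=P[s, a]))
    lr = 10.0 / (10.0 + t)               # decaying step size
    target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += lr * (target - Q[s, a])   # only the visited entry moves
    s = s_next
# Q now approximates the optimal action-value function Q*.
```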
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.