Optimal learning of quantum Hamiltonians from high-temperature Gibbs
states
- URL: http://arxiv.org/abs/2108.04842v1
- Date: Tue, 10 Aug 2021 18:00:49 GMT
- Title: Optimal learning of quantum Hamiltonians from high-temperature Gibbs
states
- Authors: Jeongwan Haah, Robin Kothari, Ewin Tang
- Abstract summary: We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^{2})$ and time linear in the sample size, $O(S N)$.
In the appendix, we show that virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small $t$ regime with similar sample and time complexity.
- Score: 0.9453554184019105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning a Hamiltonian $H$ to precision
$\varepsilon$, supposing we are given copies of its Gibbs state
$\rho=\exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))$ at a known inverse
temperature $\beta$. Anshu, Arunachalam, Kuwahara, and Soleimanifar (Nature
Physics, 2021) recently studied the sample complexity (number of copies of
$\rho$ needed) of this problem for geometrically local $N$-qubit Hamiltonians.
In the high-temperature (low $\beta$) regime, their algorithm has sample
complexity poly$(N, 1/\beta,1/\varepsilon)$ and can be implemented with
polynomial, but suboptimal, time complexity.
In this paper, we study the same question for a more general class of
Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error
$\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^{2})$ and
time complexity linear in the sample size, $O(S N)$. Furthermore, we prove a
matching lower bound showing that our algorithm's sample complexity is optimal,
and hence our time complexity is also optimal.
In the appendix, we show that virtually the same algorithm can be used to
learn $H$ from a real-time evolution unitary $e^{-it H}$ in a small $t$ regime
with similar sample and time complexity.
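To make the high-temperature mechanism concrete, the following is a minimal Python sketch of the first-order idea: writing $H=\sum_a \lambda_a E_a$ over traceless, mutually orthogonal Pauli terms, $\rho \approx (I-\beta H)/2^{N}$ for small $\beta$, so $\operatorname{Tr}(E_a\rho)\approx -\beta\lambda_a$ and each coefficient can be read off from measured Pauli averages. This is a caricature under illustrative assumptions (a toy $Z$-type term set, made-up shot counts), not the paper's actual estimator, which also accounts for higher-order corrections in $\beta$.

```python
# Hedged sketch: first-order high-temperature learning of Hamiltonian
# coefficients from Gibbs-state expectation values. To leading order in beta,
#   rho = exp(-beta H)/Tr(exp(-beta H)) ~ (I - beta H)/2^n,
# so Tr(E_a rho) ~ -beta * lambda_a for traceless, mutually orthogonal Pauli
# terms E_a, and lambda_a ~ -<E_a>/beta. Term set and shot counts are made up.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

n = 3  # toy system size
# Hypothetical term set: single-qubit Z's plus nearest-neighbour ZZ couplings.
terms = [kron_all([Z if j == i else I2 for j in range(n)]) for i in range(n)]
terms += [kron_all([Z if j in (i, i + 1) else I2 for j in range(n)])
          for i in range(n - 1)]

rng = np.random.default_rng(0)
lam_true = rng.uniform(-1, 1, size=len(terms))
H = sum(l * E for l, E in zip(lam_true, terms))

beta = 0.05  # high temperature = small beta
evals, evecs = np.linalg.eigh(H)
w = np.exp(-beta * evals)
w /= w.sum()
rho = (evecs * w) @ evecs.conj().T  # Gibbs state exp(-beta H)/Z, built exactly

S = 20_000  # shots per term; cf. the paper's S = O(log N/(beta*eps)^2)
lam_hat = np.empty(len(terms))
for a, E in enumerate(terms):
    mean = np.real(np.trace(E @ rho))     # exact expectation Tr(E_a rho)
    p_plus = (1 + mean) / 2               # Pauli measurement outcomes are +/-1
    shots = 2 * rng.binomial(1, p_plus, size=S) - 1
    lam_hat[a] = -shots.mean() / beta     # first-order inversion

# The error mixes shot noise O(1/(beta*sqrt(S))) with an O(beta) model bias.
print(np.max(np.abs(lam_hat - lam_true)))
```

The appendix result quoted above fits the same picture: for small $t$, $e^{-itH} \approx I - itH$, so expectation values again depend on the coefficients linearly, with $t$ playing roughly the role of $\beta$.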
Related papers
- Simple and Nearly-Optimal Sampling for Rank-1 Tensor Completion via Gauss-Jordan [49.1574468325115]
We revisit the sample and computational complexity of completing a rank-1 tensor in $\otimes_{i=1}^{N} \mathbb{R}^{d}$.
We present a characterization of the problem which admits an algorithm amounting to Gauss-Jordan on a pair of random linear systems.
arXiv Detail & Related papers (2024-08-10T04:26:19Z) - Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of the relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known (a sketch of the target quantity appears after this list).
Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
arXiv Detail & Related papers (2024-06-25T06:07:20Z) - Hamiltonian simulation for low-energy states with optimal time dependence [45.02537589779136]
We consider the task of simulating time evolution under a Hamiltonian $H$ within its low-energy subspace.
We present a quantum algorithm that uses $O(t\sqrt{\lambda\Gamma} + \sqrt{\lambda/\Gamma}\,\log(1/\epsilon))$ queries to the block-encoding for any $\Gamma$.
arXiv Detail & Related papers (2024-04-04T17:58:01Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^{2})$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - Learning many-body Hamiltonians with Heisenberg-limited scaling [3.460138063155115]
We propose the first algorithm to achieve the Heisenberg limit for learning an interacting $N$-qubit local Hamiltonian.
After a total evolution time of $\mathcal{O}(\epsilon^{-1})$, the proposed algorithm can efficiently estimate any parameter in the $N$-qubit Hamiltonian to $\epsilon$-error with high probability.
arXiv Detail & Related papers (2022-10-06T16:30:51Z) - Optimal algorithms for learning quantum phase states [8.736370689844682]
We show that the sample complexity of learning an unknown degree-$d$ phase state is $\Theta(n^{d})$ if we allow separable measurements.
We also consider learning phase states when $f$ has sparsity-$s$ and degree-$d$ in its $\mathbb{F}_2$ representation.
arXiv Detail & Related papers (2022-08-16T17:15:06Z) - Private and polynomial time algorithms for learning Gaussians and beyond [13.947461378608525]
We present a framework for reducing $(\varepsilon, \delta)$-differentially private (DP) statistical estimation to its non-private counterpart.
We provide the first polynomial-time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians.
arXiv Detail & Related papers (2021-11-22T16:25:51Z) - Enhancing the Quantum Linear Systems Algorithm using Richardson
Extrapolation [0.8057006406834467]
We present a quantum algorithm to solve systems of linear equations of the form $A\mathbf{x}=\mathbf{b}$ (a classical sketch of the extrapolation step appears after this list).
The algorithm achieves an exponential improvement with respect to $N$ over classical methods.
arXiv Detail & Related papers (2020-09-09T18:00:09Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell \geq 1$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}(1+\log n)^{\ell-1}}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
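For reference, the quantity targeted by the "Measuring quantum relative entropy" entry above is $D(\rho\|\sigma) = \operatorname{Tr}[\rho(\log\rho - \log\sigma)]$. The following minimal numpy sketch computes it exactly for full-rank density matrices; it illustrates the definition only, not that paper's finite-sample estimator.

```python
# Hedged sketch: exact evaluation of D(rho || sigma) = Tr[rho(log rho - log sigma)]
# for full-rank density matrices; not a finite-sample estimator.
import numpy as np

def logm_psd(A):
    """Matrix logarithm of a Hermitian positive-definite matrix."""
    evals, evecs = np.linalg.eigh(A)
    return (evecs * np.log(evals)) @ evecs.conj().T

def relative_entropy(rho, sigma):
    return float(np.real(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma)))))

def random_density_matrix(d, rng):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = G @ G.conj().T + 1e-9 * np.eye(d)  # Hermitian, nudged to full rank
    return M / np.trace(M).real

rng = np.random.default_rng(1)
rho, sigma = random_density_matrix(4, rng), random_density_matrix(4, rng)
print(relative_entropy(rho, sigma))  # >= 0 by Klein's inequality
print(relative_entropy(rho, rho))    # ~ 0
```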
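The "Enhancing the Quantum Linear Systems Algorithm using Richardson Extrapolation" entry rests on a purely classical idea: combine approximations at two step sizes so the leading error term cancels. The toy below applies it to a forward-difference derivative; the connection to the quantum algorithm, where the paper uses extrapolation to reduce Hamiltonian-simulation error, is by analogy only.

```python
# Hedged sketch: Richardson extrapolation on a toy problem. A forward
# difference has error expansion D(h) = f'(x) + c1*h + c2*h^2 + ..., so the
# combination 2*D(h/2) - D(h) cancels the O(h) term, leaving O(h^2) error.
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

f, x, h = np.sin, 1.0, 1e-2
d1 = forward_diff(f, x, h)               # error O(h)
d2 = 2 * forward_diff(f, x, h / 2) - d1  # Richardson combination: error O(h^2)
exact = np.cos(x)
print(abs(d1 - exact), abs(d2 - exact))  # the extrapolated error is far smaller
```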
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.