Learning quantum Hamiltonians at any temperature in polynomial time
- URL: http://arxiv.org/abs/2310.02243v1
- Date: Tue, 3 Oct 2023 17:50:26 GMT
- Title: Learning quantum Hamiltonians at any temperature in polynomial time
- Authors: Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang
- Abstract summary: We study the problem of learning a local quantum Hamiltonian $H$ given copies of its Gibbs state $\rho = e^{-\beta H}/\textrm{tr}(e^{-\beta H})$ at a known inverse temperature.
An algorithm is developed to learn a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.
- Score: 24.4681191608826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning a local quantum Hamiltonian $H$ given copies
of its Gibbs state $\rho = e^{-\beta H}/\textrm{tr}(e^{-\beta H})$ at a known
inverse temperature $\beta>0$. Anshu, Arunachalam, Kuwahara, and Soleimanifar
(arXiv:2004.07266) gave an algorithm to learn a Hamiltonian on $n$ qubits to
precision $\epsilon$ with only polynomially many copies of the Gibbs state, but
which takes exponential time. Obtaining a computationally efficient algorithm
has been a major open problem [Alhambra'22 (arXiv:2204.08349)], [Anshu,
Arunachalam'22 (arXiv:2204.08349)], with prior work only resolving this in the
limited cases of high temperature [Haah, Kothari, Tang'21 (arXiv:2108.04842)]
or commuting terms [Anshu, Arunachalam, Kuwahara, Soleimanifar'21]. We fully
resolve this problem, giving a polynomial time algorithm for learning $H$ to
precision $\epsilon$ from polynomially many copies of the Gibbs state at any
constant $\beta > 0$.
Our main technical contribution is a new flat polynomial approximation to the
exponential function, and a translation between multi-variate scalar
polynomials and nested commutators. This enables us to formulate Hamiltonian
learning as a polynomial system. We then show that solving a low-degree
sum-of-squares relaxation of this polynomial system suffices to accurately
learn the Hamiltonian.
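The key analytic ingredient above is that a low-degree polynomial can approximate the exponential well on a bounded interval. The sketch below is purely illustrative and is NOT the paper's "flat" construction: it just fits a plain degree-$d$ Chebyshev truncation of $e^{-\beta x}$ on $[-1,1]$ and measures the uniform error, showing how quickly such approximations converge at constant $\beta$. The degree and node count are arbitrary choices for the demo.

```python
# Illustrative sketch (not the paper's construction): approximate
# f(x) = exp(-beta * x) on [-1, 1] by a low-degree Chebyshev polynomial
# and check the sup-norm error on a dense grid.
import numpy as np
from numpy.polynomial import chebyshev as C

beta = 1.0    # inverse temperature, treated as a constant as in the abstract
degree = 12   # polynomial degree; the error decays rapidly with the degree

# Sample f at Chebyshev nodes and least-squares fit a degree-`degree` series.
nodes = np.cos(np.pi * (np.arange(200) + 0.5) / 200)
coeffs = C.chebfit(nodes, np.exp(-beta * nodes), degree)

# Uniform (sup-norm) error of the truncation over a dense grid.
grid = np.linspace(-1.0, 1.0, 10_001)
err = np.max(np.abs(C.chebval(grid, coeffs) - np.exp(-beta * grid)))
print(f"degree {degree}: sup-norm error ~ {err:.2e}")
```

For constant $\beta$ the coefficients of this series decay super-exponentially, which is why a modest degree already gives near machine-precision accuracy; the paper's contribution is a more delicate "flat" approximation with additional structure needed for the sum-of-squares argument.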
Related papers
- Improved algorithms for learning quantum Hamiltonians, via flat polynomials [7.693388437377614]
We give an improved algorithm for learning a quantum Hamiltonian given copies of its Gibbs state, that can succeed at any temperature.
Specifically, we improve over the work of Bakshi, Liu, Moitra, and Tang [BLMT24], reducing the sample complexity and runtime dependence to singly exponential in the inverse temperature.
arXiv Detail & Related papers (2024-07-05T14:25:22Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Efficient Solution of Point-Line Absolute Pose [52.775981113238046]
We revisit certain problems of pose estimation based on 3D--2D correspondences between features which may be points or lines.
We show experimentally that the resulting solvers are numerically stable and fast.
arXiv Detail & Related papers (2024-04-25T12:09:16Z) - Learning quantum Hamiltonians at any temperature in polynomial time with
Chebyshev and bit complexity [3.2634122554914]
We consider the problem of learning local quantum Hamiltonians given copies of their Gibbs state at a known inverse temperature.
Our main technical contribution is a new flat approximation of the exponential function based on the Chebyshev expansion.
arXiv Detail & Related papers (2024-02-08T10:42:47Z) - An Improved Classical Singular Value Transformation for Quantum Machine Learning [2.3326951882644553]
We study quantum speedups in quantum machine learning (QML) by analyzing the quantum singular value transformation (QSVT) framework.
Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically.
arXiv Detail & Related papers (2023-03-02T18:53:03Z) - Quantum and classical low-degree learning via a dimension-free Remez
inequality [52.12931955662553]
We show a new way to relate functions on the hypergrid to their harmonic extensions over the polytorus.
We bound the supremum of a function $f$ over products of the cyclic group $\{\exp(2\pi i k/K)\}_{k=1}^{K}$.
We extend to new spaces a recent line of work [EI22, CHP, VZ22] that gave similarly efficient methods for learning low-degree polynomials on hypercubes and observables on qubits.
arXiv Detail & Related papers (2023-01-04T04:15:40Z) - Learning many-body Hamiltonians with Heisenberg-limited scaling [3.460138063155115]
We propose the first algorithm to achieve the Heisenberg limit for learning an interacting $N$-qubit local Hamiltonian.
After a total evolution time of $\mathcal{O}(\epsilon^{-1})$, the proposed algorithm can efficiently estimate any parameter in the $N$-qubit Hamiltonian to $\epsilon$-error with high probability.
arXiv Detail & Related papers (2022-10-06T16:30:51Z) - Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z) - Optimal learning of quantum Hamiltonians from high-temperature Gibbs
states [0.9453554184019105]
We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^2)$ and time linear in the sample size, $O(S N)$.
In the appendix, we show virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small $t$ regime with similar sample and time complexity.
arXiv Detail & Related papers (2021-08-10T18:00:49Z) - Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z) - A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [55.45544638410858]
We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise.
Our algorithm learns the true halfspace within any accuracy $\epsilon$.
arXiv Detail & Related papers (2020-10-04T22:19:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.