Testing and learning structured quantum Hamiltonians
- URL: http://arxiv.org/abs/2411.00082v1
- Date: Thu, 31 Oct 2024 17:54:13 GMT
- Title: Testing and learning structured quantum Hamiltonians
- Authors: Srinivasan Arunachalam, Arkopal Dutt, Francisco Escudero Gutiérrez
- Abstract summary: We consider the problems of testing and learning an unknown $n$-qubit Hamiltonian $H$ from queries to its evolution operator $e^{-iHt}$ under the normalized Frobenius norm.
- Abstract: We consider the problems of testing and learning an unknown $n$-qubit Hamiltonian $H$ from queries to its evolution operator $e^{-iHt}$ under the normalized Frobenius norm. We prove: 1. Local Hamiltonians: We give a tolerant testing protocol to decide if $H$ is $\epsilon_1$-close to $k$-local or $\epsilon_2$-far from $k$-local, with $O(1/(\epsilon_2-\epsilon_1)^{4})$ queries, solving open questions posed in a recent work by Bluhm et al. For learning a $k$-local $H$ up to error $\epsilon$, we give a protocol with query complexity $\exp(O(k^2+k\log(1/\epsilon)))$ independent of $n$, by leveraging the non-commutative Bohnenblust-Hille inequality. 2. Sparse Hamiltonians: We give a protocol to test if $H$ is $\epsilon_1$-close to being $s$-sparse (in the Pauli basis) or $\epsilon_2$-far from being $s$-sparse, with $O(s^{6}/(\epsilon_2^2-\epsilon_1^2)^{6})$ queries. For learning up to error $\epsilon$, we show that $O(s^{4}/\epsilon^{8})$ queries suffice. 3. Learning without memory: The learning results stated above have no dependence on $n$, but require $n$-qubit quantum memory. We give subroutines that allow us to learn without memory; increasing the query complexity by a $(\log n)$-factor in the local case and an $n$-factor in the sparse case. 4. Testing without memory: We give a new subroutine called Pauli hashing, which allows one to tolerantly test $s$-sparse Hamiltonians with $O(s^{14}/(\epsilon_2^2-\epsilon_1^2)^{18})$ queries. A key ingredient is showing that $s$-sparse Pauli channels can be tolerantly tested under the diamond norm with $O(s^2/(\epsilon_2-\epsilon_1)^6)$ queries. Along the way, we prove new structural theorems for local and sparse Hamiltonians. We complement our learning results with polynomially weaker lower bounds. Furthermore, our algorithms use short time evolutions and do not assume prior knowledge of the terms in the support of the Pauli spectrum of $H$.
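A minimal sketch of the structural fact behind the locality tester (assuming a toy Pauli-spectrum representation; the names `H`, `weight`, and `dist_to_k_local` are illustrative, not from the paper): under the normalized Frobenius norm the Pauli basis is orthonormal, so the closest $k$-local Hamiltonian to $H=\sum_x \alpha_x \sigma_x$ is its truncation to the weight-$\le k$ terms, and the distance to $k$-locality is the $\ell_2$ mass of the remaining coefficients.

```python
import math

# Hypothetical toy input: a 3-qubit Hamiltonian given by its Pauli
# spectrum, H = sum_x alpha_x * sigma_x, encoded as {pauli string: coeff}.
H = {
    "XII": 0.80, "IZI": -0.50, "XXI": 0.30,   # weight <= 2 terms
    "XYZ": 0.10, "ZZX": -0.05,                # weight-3 terms
}

def weight(pauli: str) -> int:
    """Locality of a Pauli term: number of non-identity tensor factors."""
    return sum(ch != "I" for ch in pauli)

def dist_to_k_local(spectrum: dict, k: int) -> float:
    """Normalized-Frobenius distance from H to the set of k-local
    Hamiltonians. Paulis are orthonormal in this norm, so the best
    k-local approximation keeps the weight-<=k terms, and the distance
    is the l2 mass of the rest."""
    return math.sqrt(sum(a**2 for p, a in spectrum.items() if weight(p) > k))

print(dist_to_k_local(H, k=2))  # sqrt(0.10^2 + 0.05^2) ~= 0.112
```

In effect, the paper's testers estimate this kind of $\ell_2$ mass from short-time evolutions $e^{-iHt}$, without prior knowledge of which Pauli terms appear in $H$.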
Related papers
- Learning low-degree quantum objects
We show how to learn low-degree quantum objects up to $\varepsilon$-error in $\ell_2$-distance.
Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded polynomials.
arXiv Detail & Related papers (2024-05-17T17:36:44Z)
- Simple algorithms to test and learn local Hamiltonians
We consider the problems of testing and learning an $n$-qubit $k$-local Hamiltonian from queries to its evolution operator.
For learning up to error $\epsilon$, we show that $\exp(O(k^2+k\log(1/\epsilon)))$ queries suffice.
arXiv Detail & Related papers (2024-04-09T13:08:28Z) - Basic quantum subroutines: finding multiple marked elements and summing
numbers [1.1265248232450553]
We show how to find all $k$ marked elements in a list of size $N$ using the optimal number $O(\sqrt{Nk})$ of quantum queries.
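As a quick sanity check on the $O(\sqrt{Nk})$ bound (a hypothetical back-of-envelope sketch, not the paper's algorithm): repeatedly Grover-searching for one of the $i$ still-unfound marked elements costs on the order of $\sqrt{N/i}$ queries, and the total stays within $2\sqrt{Nk}$ because $\sum_{i \le k} 1/\sqrt{i} \le 2\sqrt{k}$.

```python
import math

# Back-of-envelope check: find the k marked elements one at a time,
# paying ~sqrt(N/i) Grover queries while i marked elements remain.
N, k = 10**6, 100
total = sum(math.sqrt(N / i) for i in range(1, k + 1))
print(total, 2 * math.sqrt(N * k), total <= 2 * math.sqrt(N * k))
# ~18589.0  20000.0  True
```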
arXiv Detail & Related papers (2023-02-20T19:11:44Z)
- Active Sampling for Linear Regression Beyond the $\ell_2$ Norm
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on the dimension $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1,p/2\}}\log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z)
- Learning low-degree functions from a logarithmic number of random queries
We prove that for any integer $n\in\mathbb{N}$, any $d\in\{1,\ldots,n\}$ and any $\varepsilon,\delta\in(0,1)$, a bounded function $f:\{-1,1\}^n\to[-1,1]$ of degree at most $d$ can be learned.
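For intuition, here is a naive sampling baseline in the same spirit (a hypothetical sketch, not the cited paper's logarithmic-query algorithm; the target `f` and the threshold are illustrative): estimate the low-degree Fourier coefficients $\hat{f}(S) = \mathbb{E}[f(x)\chi_S(x)]$ from random queries and keep the significant ones.

```python
import itertools
import math
import random

# Hypothetical example target: a degree-2 bounded function on {-1,1}^n.
def f(x):
    return 0.5 * x[0] * x[1] - 0.3 * x[4]

n, d, m = 6, 2, 20000
random.seed(0)
samples = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

# hat f(S) = E[f(x) * chi_S(x)], where chi_S(x) = prod_{i in S} x_i.
learned = {}
for r in range(d + 1):
    for S in itertools.combinations(range(n), r):
        est = sum(f(x) * math.prod(x[i] for i in S) for x in samples) / m
        if abs(est) > 0.05:          # keep only significant coefficients
            learned[S] = round(est, 2)
print(learned)  # ~ {(0, 1): 0.5, (4,): -0.3}
```

The point of the cited paper is that, for bounded low-degree functions, far fewer random queries, only logarithmically many in $n$, already suffice.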
arXiv Detail & Related papers (2021-09-21T13:19:04Z)
- Optimal learning of quantum Hamiltonians from high-temperature Gibbs states
We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^2)$ and time linear in the sample size, $O(SN)$.
In the appendix, we show that virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small-$t$ regime with similar sample and time complexity.
arXiv Detail & Related papers (2021-08-10T18:00:49Z)
- On Avoiding the Union Bound When Answering Multiple Differentially Private Queries
We give an algorithm for this task that achieves an expected $\ell_\infty$ error bound of $O\left(\frac{1}{\epsilon}\sqrt{k \log \frac{1}{\delta}}\right)$.
On the other hand, the algorithm of Dagan and Kur has the remarkable advantage that its $\ell_\infty$ error bound of $O\left(\frac{1}{\epsilon}\sqrt{k \log \frac{1}{\delta}}\right)$ holds not only in expectation but always.
arXiv Detail & Related papers (2020-12-16T17:58:45Z)
- Testing and reconstruction via decision trees
We study sublinear and local computation algorithms for decision trees, focusing on testing and reconstruction.
A tester that runs in $\mathrm{poly}(\log s, 1/\varepsilon)\cdot n\log n$ time makes $\mathrm{poly}(\log s, 1/\varepsilon)\cdot n$ queries to an unknown function.
arXiv Detail & Related papers (2020-12-16T04:18:00Z)
- An Optimal Separation of Randomized and Quantum Query Complexity
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}}\,(1+\log n)^{\ell-1}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma\in(0,1)$, and an approximation threshold $\epsilon>0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
- Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\mathrm{dim}_E}\right)$, then one can find the optimal policy using $O\left(\mathrm{dim}_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.