Hamiltonian Locality Testing via Trotterized Postselection
- URL: http://arxiv.org/abs/2505.06478v1
- Date: Sat, 10 May 2025 00:33:46 GMT
- Title: Hamiltonian Locality Testing via Trotterized Postselection
- Authors: John Kallaugher, Daniel Liang
- Abstract summary: The (tolerant) Hamiltonian locality testing problem, introduced in [Bluhm, Caro, Oufkir '24], is to determine whether a Hamiltonian $H$ is $\varepsilon_1$-close to being $k$-local or $\varepsilon_2$-far from any $k$-local Hamiltonian. We show that if we are allowed reverse time evolution, the $\Omega\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ lower bound is tight, giving a matching $\text{O}\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ evolution time algorithm.
- Score: 0.6138671548064355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The (tolerant) Hamiltonian locality testing problem, introduced in [Bluhm, Caro, Oufkir '24], is to determine whether a Hamiltonian $H$ is $\varepsilon_1$-close to being $k$-local (i.e. can be written as the sum of weight-$k$ Pauli operators) or $\varepsilon_2$-far from any $k$-local Hamiltonian, given access to its time evolution operator and using as little total evolution time as possible, with distance typically defined by the normalized Frobenius norm. We give the tightest known bounds for this problem, proving an $\text{O}\left(\sqrt{\frac{\varepsilon_2}{(\varepsilon_2-\varepsilon_1)^5}}\right)$ evolution time upper bound and an $\Omega\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ lower bound. Our algorithm does not require reverse time evolution or controlled application of the time evolution operator, although our lower bound applies to algorithms using either tool. Furthermore, we show that if we are allowed reverse time evolution, this lower bound is tight, giving a matching $\text{O}\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ evolution time algorithm.
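As a concrete illustration of the distance measure used above, the following is a minimal Python/NumPy sketch (with illustrative names; it is not the paper's testing algorithm, which only has black-box access to the time evolution operator $e^{-iHt}$) that computes how far a given Hamiltonian matrix is from the nearest $k$-local Hamiltonian in normalized Frobenius norm, by expanding $H$ in the Pauli basis and collecting the weight of the coefficients on Pauli strings that act on more than $k$ qubits.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. "XXI" -> X (x) X (x) I."""
    op = np.array([[1.0 + 0j]])
    for l in labels:
        op = np.kron(op, PAULIS[l])
    return op

def distance_from_k_local(H, n, k):
    """Normalized-Frobenius distance from H to the nearest k-local Hamiltonian.

    Pauli strings are orthogonal under the trace inner product, so the best
    k-local approximation keeps exactly the weight-<=k terms, and the distance
    is the l2 norm of the discarded (weight > k) Pauli coefficients.
    """
    dim = 2 ** n
    assert H.shape == (dim, dim)
    far_mass = 0.0
    for labels in itertools.product("IXYZ", repeat=n):
        weight = sum(l != "I" for l in labels)
        if weight <= k:
            continue
        c = np.trace(pauli_string(labels) @ H) / dim  # coefficient of this Pauli string
        far_mass += abs(c) ** 2
    return np.sqrt(far_mass)

# Example: a 3-qubit Hamiltonian that is 2-local except for a small ZZZ term.
n, k = 3, 2
H = 0.7 * pauli_string("XXI") + 0.3 * pauli_string("IZZ") + 0.1 * pauli_string("ZZZ")
print(distance_from_k_local(H, n, k))  # ~0.1, so H is epsilon_1-close to 2-local for epsilon_1 >= 0.1
```

The loop is exponential in $n$, which is fine for checking the definitions on small instances; the point of the paper's setting is to distinguish distance at most $\varepsilon_1$ from distance at least $\varepsilon_2$ using only evolution-time access rather than a full matrix description of $H$.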
Related papers
- Learning the structure of any Hamiltonian from minimal assumptions [2.810160553339817]
We study the problem of learning an unknown quantum many-body Hamiltonian $H$ from black-box queries to its time evolution. We present algorithms to learn any $n$-qubit Hamiltonian, which do not need to know the Hamiltonian terms in advance.
arXiv Detail & Related papers (2024-10-29T00:43:33Z) - Structure learning of Hamiltonians from real-time evolution [22.397920564324973]
We present a new, general approach to Hamiltonian learning that not only solves the challenging structure learning variant, but also resolves other open questions in the area. Our algorithm recovers the Hamiltonian to $\varepsilon$ error with total evolution time $O(\log(n)/\varepsilon)$, and has several appealing properties. As an application, we can also learn Hamiltonians exhibiting power-law decay up to accuracy $\varepsilon$ with total evolution time beating the standard limit of $1/\varepsilon^2$.
arXiv Detail & Related papers (2024-04-30T18:00:00Z) - Simple algorithms to test and learn local Hamiltonians [0.0]
We consider the problems of testing and learning an $n$-qubit $k$-local Hamiltonian from queries to its evolution operator.
For learning up to error $\epsilon$, we show that $\exp(O(k^2 + k\log(1/\epsilon)))$ queries suffice.
arXiv Detail & Related papers (2024-04-09T13:08:28Z) - $\boldsymbol{\alpha_{>}(\epsilon) = \alpha_{<}(\epsilon)}$ For The
Margolus-Levitin Quantum Speed Limit Bound [0.0]
I show that $\alpha_{>}(\epsilon)$ is indeed equal to $\alpha_{<}(\epsilon)$.
I also point out a numerical stability issue in computing $\alpha_{>}(\epsilon)$.
arXiv Detail & Related papers (2023-05-17T10:07:31Z) - Perseus: A Simple and Optimal High-Order Method for Variational
Inequalities [81.32967242727152]
A VI involves finding $x^\star \in \mathcal{X}$ such that $\langle F(x), x - x^\star\rangle \geq 0$ for all $x \in \mathcal{X}$.
We propose a $p$th-order method that does not require any line search procedure and provably converges to a weak solution at a rate of $O(\epsilon^{-2/(p+1)})$.
arXiv Detail & Related papers (2022-05-06T13:29:14Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Optimal learning of quantum Hamiltonians from high-temperature Gibbs
states [0.9453554184019105]
We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^2)$ and time linear in the sample size, $O(S N)$.
In the appendix, we show virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small $t$ regime with similar sample and time complexity.
arXiv Detail & Related papers (2021-08-10T18:00:49Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell \geq 1$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}(1+\log n)^{\ell-1}}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$.
arXiv Detail & Related papers (2020-02-17T18:41:49Z) - Tight Quantum Lower Bound for Approximate Counting with Quantum States [49.6558487240078]
We prove tight lower bounds for the following variant of the counting problem considered by Aaronson, Kothari, Kretschmer, and Thaler (2020):
The task is to distinguish whether an input set $x \subseteq [n]$ has size either $k$ or $k' = (1+\varepsilon)k$.
arXiv Detail & Related papers (2020-02-17T10:53:50Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles, and achieves complexity bounds of $\tilde{O}(n^2+n\sqrt{L/\mu})\log(1/\epsilon)$ and $O(n\sqrt{L/\epsilon})$, for $\mu>0$ and $\mu=0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.