Nearly optimal algorithms to learn sparse quantum Hamiltonians in physically motivated distances
- URL: http://arxiv.org/abs/2509.09813v1
- Date: Thu, 11 Sep 2025 19:32:55 GMT
- Title: Nearly optimal algorithms to learn sparse quantum Hamiltonians in physically motivated distances
- Authors: Amira Abbas, Nunzia Cerrato, Francisco Escudero Gutiérrez, Dmitry Grinko, Francesco Anna Mele, Pulkit Sinha,
- Abstract summary: We study the problem of learning Hamiltonians $H$ that are $s$-sparse in the Pauli basis, given access to their time evolution. We introduce two physically motivated distances between Hamiltonians and design a nearly optimal algorithm with respect to one of these metrics. A new isolation technique, inspired by the Valiant-Vazirani theorem (STOC'85), allows us to query the time evolution of a single Pauli coefficient of a sparse Hamiltonian.
- Score: 1.9938191990312617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning Hamiltonians $H$ that are $s$-sparse in the Pauli basis, given access to their time evolution. Although Hamiltonian learning has been extensively investigated, two issues recur in much of the existing literature: the absence of matching lower bounds and the use of mathematically convenient but physically opaque error measures. We address both challenges by introducing two physically motivated distances between Hamiltonians and designing a nearly optimal algorithm with respect to one of these metrics. The first, time-constrained distance, quantifies distinguishability through dynamical evolution up to a bounded time. The second, temperature-constrained distance, captures distinguishability through thermal states at bounded inverse temperatures. We show that $s$-sparse Hamiltonians with bounded operator norm can be learned in both distances with $O(s \log(1/\epsilon))$ experiments and $O(s^2/\epsilon)$ evolution time. For the time-constrained distance, we further establish lower bounds of $\Omega((s/n)\log(1/\epsilon) + s)$ experiments and $\Omega(\sqrt{s}/\epsilon)$ evolution time, demonstrating near-optimality in the number of experiments. As an intermediate result, we obtain an algorithm that learns every Pauli coefficient of $s$-sparse Hamiltonians up to error $\epsilon$ in $O(s\log(1/\epsilon))$ experiments and $O(s/\epsilon)$ evolution time, improving upon several recent results. The source of this improvement is a new isolation technique, inspired by the Valiant-Vazirani theorem (STOC'85), which shows that NP is as easy as detecting unique solutions. This isolation technique allows us to query the time evolution of a single Pauli coefficient of a sparse Hamiltonian--even when the Pauli support of the Hamiltonian is unknown--ultimately enabling us to recover the Pauli support itself.
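The abstract describes the two distances only informally. The sketch below is a toy numerical illustration under assumed formalizations: the time-constrained distance is taken to be the largest operator-norm difference between the two evolutions over times up to $T$, and the temperature-constrained distance the largest trace distance between the corresponding Gibbs states over inverse temperatures up to $\beta_{\max}$. These definitions, and all helper names (`sparse_hamiltonian`, `time_constrained_distance`, `temperature_constrained_distance`), are illustrative assumptions and need not match the paper's exact constructions.

```python
# Toy illustration of plausible "time-constrained" and "temperature-constrained"
# distances between Pauli-sparse Hamiltonians. NOT the paper's definitions;
# normalization, norm choice, and sup vs. average may differ.
import numpy as np
from scipy.linalg import expm

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZI' on 3 qubits."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def sparse_hamiltonian(n_qubits, terms):
    """H = sum_a lambda_a P_a, with terms given as {pauli_label: coefficient}."""
    H = np.zeros((2 ** n_qubits, 2 ** n_qubits), dtype=complex)
    for label, coeff in terms.items():
        H += coeff * pauli_string(label)
    return H

def time_constrained_distance(H1, H2, T, n_grid=50):
    """Max over 0 <= t <= T of the operator-norm difference of the evolutions."""
    return max(
        np.linalg.norm(expm(-1j * t * H1) - expm(-1j * t * H2), ord=2)
        for t in np.linspace(0.0, T, n_grid)
    )

def gibbs_state(H, beta):
    M = expm(-beta * H)
    return M / np.trace(M)

def temperature_constrained_distance(H1, H2, beta_max, n_grid=50):
    """Max over 0 < beta <= beta_max of the trace distance between Gibbs states."""
    def tr_dist(r1, r2):
        return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))
    return max(
        tr_dist(gibbs_state(H1, b), gibbs_state(H2, b))
        for b in np.linspace(1e-3, beta_max, n_grid)
    )

if __name__ == "__main__":
    # Two 3-qubit Hamiltonians, 2-sparse in the Pauli basis, differing in one coefficient.
    H  = sparse_hamiltonian(3, {"ZZI": 0.7, "IXX": 0.40})
    Hp = sparse_hamiltonian(3, {"ZZI": 0.7, "IXX": 0.45})
    print("time-constrained (T=1):          ", time_constrained_distance(H, Hp, T=1.0))
    print("temperature-constrained (beta=1):", temperature_constrained_distance(H, Hp, beta_max=1.0))
```

Under these assumed definitions, both quantities are nondecreasing in their budget parameter ($T$ or $\beta_{\max}$), so shrinking the time or temperature budget can only make two Hamiltonians harder to distinguish, which is the intuition the abstract attributes to these metrics.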
Related papers
- Hamiltonian Decoded Quantum Interferometry [69.7049555871155]
We introduce Hamiltonian Decoded Quantum Interferometry (HDQI). HDQI utilizes coherent measurements and the symplectic representation of the Pauli group to reduce Gibbs sampling and Hamiltonian Bellians. We show that HDQI efficiently prepares Gibbs states at arbitrary temperatures for a class of physically motivated commuting Hamiltonians.
arXiv Detail & Related papers (2025-10-09T08:06:15Z) - Lower Bounds for Learning Hamiltonians from Time Evolution [5.118083299467833]
We consider the problem of learning Hamiltonians from time evolution. We show that any learning algorithm with inverse time resolution requires super-polynomial total evolution time.
arXiv Detail & Related papers (2025-09-25T01:50:55Z) - Sublinear Time Quantum Sensitivity Sampling [57.356528942341534]
We present a unified framework for quantum sensitivity sampling, extending the advantages of quantum computing to a broad class of classical approximation problems. Our framework provides a streamlined approach for constructing coresets and offers significant runtime improvements in applications such as clustering, regression, and low-rank approximation.
arXiv Detail & Related papers (2025-09-20T20:18:49Z) - Slow Mixing of Quantum Gibbs Samplers [47.373245682678515]
We present a quantum generalization of these tools through a generic bottleneck lemma. This lemma focuses on quantum measures of distance, analogous to the classical Hamming distance but rooted in uniquely quantum principles. We show how to lift classical slow mixing results in the presence of a transverse field using Poisson Feynman-Kac techniques.
arXiv Detail & Related papers (2024-11-06T22:51:27Z) - Learning $k$-body Hamiltonians via compressed sensing [0.5867838258848337]
We study the problem of learning a $k$-body Hamiltonian with $M$ unknown Pauli terms that are not necessarily geometrically local. We propose a protocol that learns the Hamiltonian to precision $\epsilon$ with total evolution time.
arXiv Detail & Related papers (2024-10-24T17:16:19Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Hamiltonian Property Testing [0.8192907805418583]
Locality is a fundamental feature of many physical time evolutions. Prior to this work, no protocols were known for rigorously testing whether an unknown Hamiltonian is local.
arXiv Detail & Related papers (2024-03-05T13:44:28Z) - A hybrid quantum algorithm to detect conical intersections [39.58317527488534]
We show that for real molecular Hamiltonians, the Berry phase can be obtained by tracing a local optimum of a variational ansatz along the chosen path.
We demonstrate numerically the application of our algorithm on small toy models of the formaldimine molecule.
arXiv Detail & Related papers (2023-04-12T18:00:01Z) - Online Learning with Adversaries: A Differential-Inclusion Analysis [52.43460995467893]
We introduce an observation-matrix-based framework for fully asynchronous online Federated Learning with adversaries.
Our main result is that the proposed algorithm almost surely converges to the desired mean $\mu$.
We derive this convergence using a novel differential-inclusion-based two-timescale analysis.
arXiv Detail & Related papers (2023-04-04T04:32:29Z) - On the complexity of implementing Trotter steps [2.1369834525800138]
We develop methods to perform faster Trotter steps with complexity sublinear in the number of terms.
We also realize faster Trotter steps when certain blocks of Hamiltonian coefficients have low rank.
Our result suggests that exploiting Hamiltonian structural properties is both necessary and sufficient for implementing Trotter steps with lower gate complexity.
arXiv Detail & Related papers (2022-11-16T19:00:01Z) - Learning many-body Hamiltonians with Heisenberg-limited scaling [3.460138063155115]
We propose the first algorithm to achieve the Heisenberg limit for learning an interacting $N$-qubit local Hamiltonian.
After a total evolution time of $\mathcal{O}(\epsilon^{-1})$, the proposed algorithm can efficiently estimate any parameter in the $N$-qubit Hamiltonian to $\epsilon$-error with high probability.
arXiv Detail & Related papers (2022-10-06T16:30:51Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural networks with $p$ parameters over arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n = \mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n = \exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z)