Improved algorithms for learning quantum Hamiltonians, via flat polynomials
- URL: http://arxiv.org/abs/2407.04540v1
- Date: Fri, 5 Jul 2024 14:25:22 GMT
- Title: Improved algorithms for learning quantum Hamiltonians, via flat polynomials
- Authors: Shyam Narayanan
- Abstract summary: We give an improved algorithm for learning a quantum Hamiltonian given copies of its Gibbs state, that can succeed at any temperature.
Specifically, we improve over the work of Bakshi, Liu, Moitra, and Tang [BLMT24] by reducing the sample complexity and runtime dependence to singly exponential in the inverse-temperature parameter.
- Score: 7.693388437377614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give an improved algorithm for learning a quantum Hamiltonian given copies of its Gibbs state, that can succeed at any temperature. Specifically, we improve over the work of Bakshi, Liu, Moitra, and Tang [BLMT24], by reducing the sample complexity and runtime dependence to singly exponential in the inverse-temperature parameter, as opposed to doubly exponential. Our main technical contribution is a new flat polynomial approximation to the exponential function, with significantly lower degree than the flat polynomial approximation used in [BLMT24].
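The degree/accuracy trade-off behind such results can be illustrated with a plain truncated Taylor expansion of $e^{-x}$ on a bounded interval (a minimal sketch only; the paper's flat polynomials additionally control the polynomial's behavior outside the approximation interval, which plain Taylor truncation does not):

```python
import math

def taylor_exp_neg(x, degree):
    """Degree-`degree` Taylor polynomial of e^{-x} around 0."""
    return sum((-x) ** k / math.factorial(k) for k in range(degree + 1))

def max_error(degree, beta, grid=200):
    """Worst-case error against e^{-x} on [0, beta], checked on a grid."""
    pts = [beta * i / grid for i in range(grid + 1)]
    return max(abs(taylor_exp_neg(x, degree) - math.exp(-x)) for x in pts)

# The error on [0, 4] drops rapidly once the degree grows past ~ e * beta.
for d in (4, 8, 16, 32):
    print(d, max_error(d, beta=4.0))
```

The interval length here plays the role of the inverse temperature: the larger it is, the higher the degree needed for a fixed accuracy.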
Related papers
- Learning quantum Hamiltonians at any temperature in polynomial time with Chebyshev and bit complexity [3.2634122554914]
We consider the problem of learning local quantum Hamiltonians given copies of their state at a known inverse temperature.
Our main technical contribution is a new flat approximation of the exponential function based on the Chebyshev expansion.
arXiv Detail & Related papers (2024-02-08T10:42:47Z) - Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates [50.36933471975506]
We consider the finite-sum optimization problem, where each component function is strongly convex and has Lipschitz continuous gradient and Hessian.
The recently proposed incremental quasi-Newton method is based on BFGS update and achieves a local superlinear convergence rate.
This paper proposes a more efficient quasi-Newton method by incorporating the symmetric rank-1 update into the incremental framework.
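The symmetric rank-1 (SR1) secant update at the core of that method can be sketched directly (a toy dense 2-D version for illustration, not the paper's incremental scheme; the Hessian, step, and starting matrix below are hypothetical):

```python
def sr1_update(B, s, y, tol=1e-8):
    """SR1 update: B+ = B + r r^T / (r^T s), with r = y - B s.
    Skips the update when the denominator is too small (standard safeguard)."""
    n = len(s)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    if abs(denom) < tol:
        return B
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

# For a quadratic objective with Hessian A, the gradient change is y = A s,
# and the updated B satisfies the secant equation B+ s = y exactly.
A = [[3.0, 1.0], [1.0, 2.0]]   # hypothetical Hessian
B = [[1.0, 0.0], [0.0, 1.0]]   # initial Hessian approximation
s = [1.0, -0.5]                # step
y = [sum(A[i][j] * s[j] for j in range(2)) for i in range(2)]
B1 = sr1_update(B, s, y)
Bs = [sum(B1[i][j] * s[j] for j in range(2)) for i in range(2)]
print(Bs, y)
```

Unlike BFGS, the SR1 update is rank-1 and need not keep B positive definite, which is part of why it can match the Hessian faster along observed directions.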
arXiv Detail & Related papers (2024-02-04T05:54:51Z) - Exponentiation of Parametric Hamiltonians via Unitary interpolation [0.8399688944263842]
We introduce two ideas for the time-efficient approximation of matrix exponentials of linear multi-parametric Hamiltonians.
We modify the Suzuki-Trotter product formula from an approximation into a compilation scheme, improving both accuracy and computational time.
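As a point of reference for the product-formula baseline, a first-order Suzuki-Trotter approximation of $e^{-it(X+Z)}$ for a single qubit can be checked in a few lines (a minimal sketch using closed-form Pauli exponentials, not the unitary-interpolation scheme the paper proposes):

```python
import math

# Pauli matrices and helpers for 2x2 complex matrices (lists of lists).
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def mat_add(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def mat_scale(c, A): return [[c * A[i][j] for j in range(2)] for i in range(2)]
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def exp_pauli(P, t):
    """e^{-i t P} for a Pauli matrix P (P^2 = I): cos(t) I - i sin(t) P."""
    return mat_add(mat_scale(math.cos(t), I2), mat_scale(-1j * math.sin(t), P))

def exact(t):
    """e^{-i t (X+Z)}: since X+Z = sqrt(2) * (unit Pauli vector), the same
    closed form applies with angle sqrt(2) * t."""
    s = math.sqrt(2)
    return mat_add(mat_scale(math.cos(s * t), I2),
                   mat_scale(-1j * math.sin(s * t) / s, mat_add(X, Z)))

def trotter(t, n):
    """First-order Suzuki-Trotter: (e^{-i t X / n} e^{-i t Z / n})^n."""
    step = mat_mul(exp_pauli(X, t / n), exp_pauli(Z, t / n))
    U = I2
    for _ in range(n):
        U = mat_mul(U, step)
    return U

def dist(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

# The first-order formula's error shrinks roughly like 1/n.
print(dist(exact(1.0), trotter(1.0, 10)), dist(exact(1.0), trotter(1.0, 100)))
```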
arXiv Detail & Related papers (2024-02-02T15:29:55Z) - An integral algorithm of exponential observables for interacting fermions in quantum Monte Carlo simulation [7.826326818086168]
Exponential observables, formulated as $\log \langle e^{\hat{X}} \rangle$ where $\hat{X}$ is an extensive quantity, play a critical role in the study of quantum many-body systems.
We propose a comprehensive algorithm for quantifying these observables in interacting fermion systems.
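The integral identity behind such algorithms, $\log\langle e^{\hat{X}}\rangle = \int_0^1 \langle \hat{X}\rangle_\lambda \, d\lambda$ with the expectation taken under the reweighted distribution $\propto p\, e^{\lambda X}$, can be sanity-checked on a classical toy distribution (the paper's quantum Monte Carlo setting for interacting fermions is far more involved; the probabilities and values below are hypothetical):

```python
import math

p = [0.2, 0.5, 0.3]    # toy probabilities (hypothetical)
x = [1.0, -0.5, 2.0]   # values of the extensive quantity X

def mean_at(lam):
    """<X> under the reweighted distribution proportional to p * e^{lam * X}."""
    w = [pi * math.exp(lam * xi) for pi, xi in zip(p, x)]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def log_exp_mean_integral(steps=1000):
    """Trapezoid rule for the coupling-constant integral over lam in [0, 1]."""
    h = 1.0 / steps
    total = 0.5 * (mean_at(0.0) + mean_at(1.0))
    total += sum(mean_at(k * h) for k in range(1, steps))
    return total * h

direct = math.log(sum(pi * math.exp(xi) for pi, xi in zip(p, x)))
print(direct, log_exp_mean_integral())
```

The point of the integral form is that each $\langle \hat{X}\rangle_\lambda$ is an ordinary expectation, which Monte Carlo methods estimate far more reliably than the exponential average itself.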
arXiv Detail & Related papers (2023-11-06T19:00:04Z) - Learning quantum Hamiltonians at any temperature in polynomial time [24.4681191608826]
We study the problem of learning a local quantum Hamiltonian $H$ given copies of its Gibbs state $\rho = e^{-\beta H}/\textrm{tr}(e^{-\beta H})$ at a known inverse temperature $\beta$.
An algorithm is developed that learns a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.
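The Gibbs state $\rho = e^{-\beta H}/\textrm{tr}(e^{-\beta H})$ that these learning algorithms take as input is easy to compute in a diagonal toy case (a sketch assuming the eigenvalues of $H$ are known, which is exactly what the learning problem does not assume):

```python
import math

def gibbs_diagonal(energies, beta):
    """Gibbs weights e^{-beta E_i} / Z for a Hamiltonian given by its
    eigenvalues (diagonal toy case; general H needs a matrix exponential)."""
    # Subtract the minimum energy first for numerical stability.
    e0 = min(energies)
    w = [math.exp(-beta * (e - e0)) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# A single qubit with H = Z (eigenvalues -1, +1): large beta (low temperature)
# concentrates the state on the ground level, making learning harder.
print(gibbs_diagonal([-1.0, 1.0], beta=0.1))
print(gibbs_diagonal([-1.0, 1.0], beta=5.0))
```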
arXiv Detail & Related papers (2023-10-03T17:50:26Z) - Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z) - An Exponential Separation Between Quantum Query Complexity and the Polynomial Degree [79.43134049617873]
In this paper, we demonstrate an exponential separation between the exact degree and the approximate quantum query complexity of a partial function.
For a larger alphabet size, we obtain a constant versus polynomial separation.
arXiv Detail & Related papers (2023-01-22T22:08:28Z) - Non-perturbative analytical diagonalization of Hamiltonians with application to coupling suppression and enhancement in cQED [0.0]
Deriving effective Hamiltonian models plays an essential role in quantum theory.
We present two symbolic methods for computing effective Hamiltonian models.
We study the ZZ and cross-resonance interactions of superconducting qubit systems.
arXiv Detail & Related papers (2021-11-30T19:01:44Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
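The idea of replacing the sign function with a combination of sine functions can be sketched with the classic Fourier sine series of sign on $(-\pi, \pi)$ (a toy illustration only; the paper's actual frequency-domain estimator and BNN training loop are not reproduced here):

```python
import math

def sign_fourier(x, terms=10):
    """Truncated Fourier sine series of sign(x) on (-pi, pi):
    sign(x) ~ (4/pi) * sum over odd k of sin(k x) / k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, 2 * terms, 2))

def sign_fourier_grad(x, terms=10):
    """Term-wise derivative of the series: a smooth surrogate for the
    (almost-everywhere zero) derivative of sign, usable as a training gradient."""
    return (4 / math.pi) * sum(math.cos(k * x) for k in range(1, 2 * terms, 2))

# The truncated series approaches +/-1 away from 0 while staying smooth at 0,
# so it provides nonzero gradients where the true sign function has none.
print(sign_fourier(1.0), sign_fourier(-1.0), sign_fourier_grad(0.0))
```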
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound on the best possible mean squared error achievable by any low-degree polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.