Improved machine learning algorithm for predicting ground state
properties
- URL: http://arxiv.org/abs/2301.13169v1
- Date: Mon, 30 Jan 2023 18:40:07 GMT
- Title: Improved machine learning algorithm for predicting ground state
properties
- Authors: Laura Lewis, Hsin-Yuan Huang, Viet T. Tran, Sebastian Lehner, Richard
Kueng, John Preskill
- Abstract summary: We give a classical machine learning (ML) algorithm for predicting ground state properties with an inductive bias encoding geometric locality.
The proposed ML model can efficiently predict ground state properties of an $n$-qubit gapped local Hamiltonian after learning from only $\mathcal{O}(\log(n))$ data.
- Score: 3.156207648146739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding the ground state of a quantum many-body system is a fundamental
problem in quantum physics. In this work, we give a classical machine learning
(ML) algorithm for predicting ground state properties with an inductive bias
encoding geometric locality. The proposed ML model can efficiently predict
ground state properties of an $n$-qubit gapped local Hamiltonian after learning
from only $\mathcal{O}(\log(n))$ data about other Hamiltonians in the same
quantum phase of matter. This improves substantially upon previous results that
require $\mathcal{O}(n^c)$ data for a large constant $c$. Furthermore, the
training and prediction time of the proposed ML model scale as $\mathcal{O}(n
\log n)$ in the number of qubits $n$. Numerical experiments on physical systems
with up to 45 qubits confirm the favorable scaling in predicting ground state
properties using a small training dataset.
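The abstract's key idea is an inductive bias encoding geometric locality: a property at one site should depend only on Hamiltonian parameters in a small neighborhood, so each site can be learned from few samples. The following is a minimal, hedged sketch of that idea on synthetic data, not the paper's actual algorithm (which uses classical-shadow features and $\ell_1$-regularized regression); all names and the linear toy model here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a geometrically local inductive bias on a 1D chain.
# Synthetic ground truth: the "property" at each site depends only on
# Hamiltonian parameters within a small window around that site, so one
# small linear model per site suffices. This is a toy stand-in, not the
# paper's classical-shadow-based method.

rng = np.random.default_rng(0)
n = 20          # number of sites (qubits)
radius = 1      # locality radius: each feature window spans 2*radius + 1 sites
m = 30          # number of training Hamiltonians (small, in the O(log n) spirit)

def local_window(params, site, r):
    """Parameters within distance r of `site`, zero-padded at the chain ends."""
    lo, hi = max(0, site - r), min(len(params), site + r + 1)
    w = np.zeros(2 * r + 1)
    start = lo - (site - r)
    w[start:start + (hi - lo)] = params[lo:hi]
    return w

# Fixed local rule generating the synthetic "ground state property".
true_w = rng.normal(size=2 * radius + 1)
def property_at(params, site):
    return local_window(params, site, radius) @ true_w

# Training data: m random parameter vectors and their site-wise properties.
X_params = rng.uniform(-1, 1, size=(m, n))
Y = np.array([[property_at(p, i) for i in range(n)] for p in X_params])

# Fit one least-squares model per site, using only the local features.
models = []
for i in range(n):
    Phi = np.array([local_window(p, i, radius) for p in X_params])
    w_hat, *_ = np.linalg.lstsq(Phi, Y[:, i], rcond=None)
    models.append(w_hat)

# Predict on a fresh Hamiltonian and compare against the ground truth.
p_test = rng.uniform(-1, 1, size=n)
pred = np.array([local_window(p_test, i, radius) @ models[i] for i in range(n)])
truth = np.array([property_at(p_test, i) for i in range(n)])
print(float(np.max(np.abs(pred - truth))))
```

Because each per-site model has only `2*radius + 1` parameters, the sample count needed per site is independent of `n`; the locality assumption is what makes the small training set in the abstract plausible.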
Related papers
- Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms [48.869199703062606]
A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians.
We introduce two approaches that achieve a constant sample complexity, independent of system size $n$, for learning ground state properties.
arXiv Detail & Related papers (2024-05-28T18:00:32Z) - Exponentially improved efficient machine learning for quantum many-body states with provable guarantees [0.0]
We provide theoretical guarantees for efficient learning of quantum many-body states and their properties, with model-independent applications.
arXiv Detail & Related papers (2023-04-10T02:22:36Z) - Learning to predict arbitrary quantum processes [7.69390398476646]
We present an efficient machine learning (ML) algorithm for predicting any unknown quantum process over $n$ qubits.
Our results highlight the potential for ML models to predict the output of complex quantum dynamics much faster than the time needed to run the process itself.
arXiv Detail & Related papers (2022-10-26T17:52:59Z) - Neural network enhanced measurement efficiency for molecular groundstates [63.36515347329037]
We adapt common neural network models to learn complex groundstate wavefunctions for several molecular qubit Hamiltonians.
We find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables.
arXiv Detail & Related papers (2022-06-30T17:45:05Z) - Average-case Speedup for Product Formulas [69.68937033275746]
Product formulas, or Trotterization, are the oldest and remain an appealing method to simulate quantum systems.
We prove that the Trotter error exhibits a qualitatively better scaling for the vast majority of input states.
Our results open doors to the study of quantum algorithms in the average case.
arXiv Detail & Related papers (2021-11-09T18:49:48Z) - Hamiltonian simulation with random inputs [74.82351543483588]
We develop a theory of the average-case performance of Hamiltonian simulation with random initial states.
Numerical evidence suggests that this theory accurately characterizes the average error for concrete models.
arXiv Detail & Related papers (2021-11-08T19:08:42Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Information-theoretic bounds on quantum advantage in machine learning [6.488575826304023]
We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments.
For any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the optimal quantum ML model.
arXiv Detail & Related papers (2021-01-07T10:10:09Z) - Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate the cost of measuring observables in both the NISQ and fault-tolerant settings, assuming a simple target observable: the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.