Transformer neural networks and quantum simulators: a hybrid approach for simulating strongly correlated systems
- URL: http://arxiv.org/abs/2406.00091v1
- Date: Fri, 31 May 2024 17:55:27 GMT
- Title: Transformer neural networks and quantum simulators: a hybrid approach for simulating strongly correlated systems
- Authors: Hannah Lange, Guillaume Bornet, Gabriel Emperauger, Cheng Chen, Thierry Lahaye, Stefan Kienle, Antoine Browaeys, Annabelle Bohrdt
- Abstract summary: We present a hybrid optimization scheme for neural quantum states (NQS) that involves a data-driven pretraining with numerical or experimental data and a second, Hamiltonian-driven optimization stage.
Our work paves the way for a reliable and efficient optimization of neural quantum states.
- Score: 1.6494451064539348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to their great expressivity and versatility, neural networks have gained attention for simulating large two-dimensional quantum many-body systems. However, their expressivity comes with the cost of a challenging optimization due to the in general rugged and complicated loss landscape. Here, we present a hybrid optimization scheme for neural quantum states (NQS) that involves a data-driven pretraining with numerical or experimental data and a second, Hamiltonian-driven optimization stage. By using both projective measurements from the computational basis as well as expectation values from other measurement configurations such as spin-spin correlations, our pretraining gives access to the sign structure of the state, yielding improved and faster convergence that is robust w.r.t. experimental imperfections and limited datasets. We apply the hybrid scheme to the ground state search for the 2D transverse field Ising model and the 2D dipolar XY model on $6\times 6$ and $10\times 10$ square lattices with a patched transformer wave function, using numerical and experimental data from a programmable Rydberg quantum simulator [Chen et al., Nature 616 (2023)], with snapshots of the quantum system obtained from the different measurement configurations, and show that the information from the second basis highly improves the performance. Our work paves the way for a reliable and efficient optimization of neural quantum states.
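The two-stage recipe from the abstract can be sketched in miniature: first a data-driven pretraining stage that maximizes the likelihood of projective measurement snapshots, then a Hamiltonian-driven stage that minimizes the variational energy. The sketch below is a hedged toy illustration, not the paper's method: it substitutes a mean-field log-amplitude ansatz and a 3-site transverse-field Ising chain for the patched transformer and the 6×6/10×10 lattices, and the snapshots, learning rates, and couplings are all illustrative assumptions.

```python
# Toy two-stage hybrid optimization for a neural-quantum-state-like ansatz.
# All specifics (ansatz, system size, fake snapshot data) are assumptions.
import itertools
import numpy as np

n, J, h = 3, 1.0, 0.5  # tiny open TFIM chain (paper uses 2D lattices)
configs = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n spins

def log_psi(theta, s):
    """Unnormalized real log-amplitude: a simple mean-field ansatz."""
    return s @ theta

def psi_vec(theta):
    a = np.exp(log_psi(theta, configs))
    return a / np.linalg.norm(a)

def energy(theta):
    """<H> for H = -J sum_i s_i s_{i+1} - h sum_i sigma^x_i, by full summation."""
    p = psi_vec(theta)
    diag = -J * np.sum(configs[:, :-1] * configs[:, 1:], axis=1)
    e = np.sum(p**2 * diag)
    # off-diagonal sigma^x terms: flip each spin and match the partner config
    for i in range(n):
        flipped = configs.copy(); flipped[:, i] *= -1
        idx = [int(np.where((configs == f).all(axis=1))[0][0]) for f in flipped]
        e += -h * np.sum(p * p[idx])
    return e

rng = np.random.default_rng(0)
theta = rng.normal(size=n)

# Stage 1: data-driven pretraining -- maximize the mean log-likelihood of
# projective snapshots (fake all-up snapshots stand in for simulator data).
snapshots = np.ones((100, n))
for _ in range(200):
    # grad of mean log p(s) = 2*E_data[s] - 2*E_model[s] for this ansatz
    p = psi_vec(theta)
    grad = 2 * snapshots.mean(axis=0) - 2 * (p**2) @ configs
    theta += 0.1 * grad

# Stage 2: Hamiltonian-driven optimization -- gradient descent on <H>
for _ in range(300):
    g, eps = np.zeros(n), 1e-5
    for i in range(n):  # finite-difference energy gradient
        d = np.zeros(n); d[i] = eps
        g[i] = (energy(theta + d) - energy(theta - d)) / (2 * eps)
    theta -= 0.05 * g

print(round(energy(theta), 3))
```

In this toy, pretraining steers the ansatz toward the snapshot distribution before any energy gradient is taken, mirroring the paper's point that a data-driven warm start yields faster and more robust convergence of the variational stage.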
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Deep Neural Networks as Variational Solutions for Correlated Open Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
arXiv Detail & Related papers (2024-01-25T13:41:34Z)
- Physics-informed Neural Networks for Encoding Dynamics in Real Physical Systems [0.0]
This dissertation investigates physics-informed neural networks (PINNs) as candidate models for encoding governing equations.
We show that for the pendulum system the PINNs outperformed equivalent uninformed neural networks (NNs) in the ideal data case.
In similar test cases with real data collected from an experiment, PINNs outperformed NNs with 9.3x and 9.1x accuracy improvements for 67 linearly-spaced and uniformly-distributed random points respectively.
arXiv Detail & Related papers (2024-01-07T16:19:28Z)
- Enhancing variational Monte Carlo using a programmable quantum simulator [0.3078264203938486]
We show that projective measurement data can be used to enhance in silico simulations of quantum matter.
We employ data-enhanced variational Monte Carlo to train powerful autoregressive wavefunction ansätze based on recurrent neural networks.
Our work highlights the promise of hybrid quantum--classical approaches for large-scale simulation of quantum many-body systems.
arXiv Detail & Related papers (2023-08-04T18:08:49Z)
- Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition [16.1356415877484]
We introduce quantum hypernetworks as a mechanism to train binary neural networks on quantum computers.
We show that our approach effectively finds optimal parameters, hyperparameters and architectural choices with high probability on classification problems.
Our unified approach provides an immense scope for other applications in the field of machine learning.
arXiv Detail & Related papers (2023-01-19T20:06:48Z)
- Neural network enhanced measurement efficiency for molecular groundstates [63.36515347329037]
We adapt common neural network models to learn complex groundstate wavefunctions for several molecular qubit Hamiltonians.
We find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables.
arXiv Detail & Related papers (2022-06-30T17:45:05Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Efficient 2D Tensor Network Simulation of Quantum Systems [6.074275058563179]
2D tensor networks such as Projected Entangled States (PEPS) are well-suited for key classes of physical systems and quantum circuits.
We propose new algorithms and software abstractions for PEPS-based methods, accelerating the bottleneck operation of contracting a subnetwork.
arXiv Detail & Related papers (2020-06-26T22:36:56Z)
- Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition of both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization respectively.
arXiv Detail & Related papers (2020-02-25T18:28:39Z)
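The last entry above concerns quantized gradient communication. As a generic hedged illustration (this is not the paper's BinGrad or ORQ schemes, and their optimal condition is not derived here), the sketch below implements unbiased multi-level stochastic quantization of a gradient vector: magnitudes are mapped to a small number of levels and rounded up with probability equal to the fractional part, which makes the quantizer unbiased for any gradient distribution.

```python
# Generic unbiased multi-level stochastic gradient quantization (illustrative;
# the paper's BinGrad/ORQ schemes differ in how levels are chosen).
import numpy as np

def quantize(g, levels=2, rng=np.random.default_rng(0)):
    """Stochastically quantize g to `levels` magnitudes per sign,
    scaled by the max absolute value. Unbiased: E[quantize(g)] = g."""
    scale = np.abs(g).max()
    if scale == 0:
        return g.copy()
    s = levels - 1
    x = np.abs(g) / scale * s            # map |g| into [0, s]
    low = np.floor(x)
    # round up with probability equal to the fractional part -> unbiased
    q = low + (rng.random(g.shape) < (x - low))
    return np.sign(g) * q / s * scale

g = np.array([0.3, -1.2, 0.7, 0.0])
q = quantize(g, levels=4)  # each entry is one of a few representable values

# unbiasedness check: averaging many quantizations recovers g
avg = np.mean([quantize(g, 4, np.random.default_rng(i)) for i in range(5000)],
              axis=0)
```

Only the quantized levels (a few bits per entry) plus one scale need to be communicated per gradient vector, which is the communication saving such schemes target.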
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.