Deep recurrent networks predicting the gap evolution in adiabatic
quantum computing
- URL: http://arxiv.org/abs/2109.08492v5
- Date: Wed, 7 Jun 2023 10:21:13 GMT
- Title: Deep recurrent networks predicting the gap evolution in adiabatic
quantum computing
- Authors: Naeimeh Mohseni, Carlos Navarrete-Benlloch, Tim Byrnes, Florian
Marquardt
- Abstract summary: We explore the potential of deep learning for discovering a mapping from the parameters that fully identify a problem Hamiltonian to the parametric dependence of the gap.
We show that a long short-term memory network succeeds in predicting the gap when the parameter space scales linearly with system size.
Remarkably, we show that once this architecture is combined with a convolutional neural network to deal with the spatial structure of the model, the gap evolution can even be predicted for system sizes larger than the ones seen by the neural network during training.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In adiabatic quantum computing, finding how the gap of the
Hamiltonian depends on the parameter varied during the adiabatic sweep is
crucial for optimizing the speed of the computation. Inspired by this
challenge, in this work we explore the potential of deep learning for
discovering a mapping from the parameters that fully identify a problem
Hamiltonian to the aforementioned parametric dependence of the gap, using
different network architectures. Through this example, we conjecture that a
limiting factor for the learnability of such problems is the size of the input,
that is, how the number of parameters needed to identify the Hamiltonian scales
with the system size. We show that a long short-term memory network succeeds in
predicting the gap when the parameter space scales linearly with system size.
Remarkably, we show that once this architecture is combined with a
convolutional neural network to deal with the spatial structure of the model,
the gap evolution can even be predicted for system sizes larger than the ones
seen by the neural network during training. This provides a significant speedup
compared with existing exact and approximate algorithms for calculating the
gap.
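To make the architecture described in the abstract concrete, below is a minimal sketch of a convolutional encoder over the Hamiltonian parameters feeding a recurrent network that unrolls over the discretised sweep parameter and emits a gap value at each step. This is not the authors' code: the problem encoding (per-site fields and couplings of an Ising-type chain), the choice of PyTorch, the layer sizes, the number of sweep steps, and all names are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): CNN over per-site
# Hamiltonian parameters -> LSTM unrolled over the sweep parameter s,
# predicting the gap at each sweep step.
import torch
import torch.nn as nn


class GapPredictor(nn.Module):
    def __init__(self, n_param_channels=2, hidden=64, n_sweep_steps=50):
        super().__init__()
        self.n_sweep_steps = n_sweep_steps
        # 1D convolutions over the spin chain: weight sharing across sites
        # is what would allow evaluation on system sizes unseen in training.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_param_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over sites -> size-independent code
        )
        self.lstm = nn.LSTM(input_size=hidden + 1, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, params):
        # params: (batch, n_param_channels, n_sites), e.g. fields h_i and couplings J_i
        batch = params.shape[0]
        code = self.encoder(params).squeeze(-1)  # (batch, hidden)
        # Discretised sweep parameter s in [0, 1], fed to the LSTM at each step.
        s = torch.linspace(0.0, 1.0, self.n_sweep_steps, device=params.device)
        s = s.view(1, -1, 1).expand(batch, -1, -1)
        seq = torch.cat(
            [code.unsqueeze(1).expand(-1, self.n_sweep_steps, -1), s], dim=-1)
        out, _ = self.lstm(seq)               # unroll over the sweep
        return self.head(out).squeeze(-1)     # (batch, n_sweep_steps) gap values


model = GapPredictor()
fake_params = torch.randn(8, 2, 12)   # 8 random instances of a 12-spin chain
print(model(fake_params).shape)       # torch.Size([8, 50])
```

The site-translation-invariant convolutions and the pooling over sites are what would let such a model accept chains longer than those seen during training, mirroring the size generalization reported in the abstract; the actual training setup and hyperparameters of the paper may differ.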
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- On the growth of the parameters of approximating ReLU neural networks [0.542249320079018]
This work focuses on the analysis of fully connected feed forward ReLU neural networks as they approximate a given, smooth function.
In contrast to conventionally studied universal approximation properties under increasing architectures, we are concerned with the growth of the parameters of approximating networks.
arXiv Detail & Related papers (2024-06-21T07:45:28Z)
- Deep Neural Networks as Variational Solutions for Correlated Open Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
arXiv Detail & Related papers (2024-01-25T13:41:34Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks [0.16385815610837165]
Sparse deep learning addresses challenges by recovering a sparse representation of the underlying target function.
Deep neural architectures compressed via structured sparsity provide low latency inference, higher data throughput, and reduced energy consumption.
We propose structurally sparse Bayesian neural networks which prune excessive nodes with (i) Spike-and-Slab Group Lasso (SS-GL), and (ii) Spike-and-Slab Group Horseshoe (SS-GHS) priors.
arXiv Detail & Related papers (2023-08-17T17:14:18Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- Deep learning of spatial densities in inhomogeneous correlated quantum systems [0.0]
We show that we can learn to predict densities using convolutional neural networks trained on random potentials.
We show that our approach handles well the interplay of interference and interactions, as well as the behaviour of models with phase transitions in inhomogeneous situations.
arXiv Detail & Related papers (2022-11-16T17:10:07Z)
- Scalable Spatiotemporal Graph Neural Networks [14.415967477487692]
Graph neural networks (GNNs) are often the core component of the forecasting architecture.
In most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph.
We propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics.
arXiv Detail & Related papers (2022-09-14T09:47:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is competitive with or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)