Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction
- URL: http://arxiv.org/abs/2310.20671v2
- Date: Thu, 30 Jan 2025 17:44:06 GMT
- Title: Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction
- Authors: José Daniel Viqueira, Daniel Faílde, Mariamo M. Juane, Andrés Gómez, David Mera
- Abstract summary: Emulation arises as the main near-term alternative to explore the potential of QRNNs.
We show how the present and past information from a time series is transmitted through the circuit.
We derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters.
- Score: 3.1690235522182104
- License:
- Abstract: Quantum Recurrent Neural Networks (QRNNs) are robust candidates for modelling and predicting future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need for mid-circuit measurements. Those increase the requirements for quantum hardware, which in the current NISQ era does not allow reliable computations. Emulation arises as the main near-term alternative to explore the potential of QRNNs, but existing quantum emulators are not dedicated to circuits with multiple intermediate measurements. In this context, we design a specific emulation method that relies on density matrix formalism. Using a compact tensor notation, we provide the mathematical formulation of the operator-sum representation involved. This allows us to show how the present and past information from a time series is transmitted through the circuit, and how to reduce the computational cost in every time step of the emulated network. In addition, we derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters, which are needed when the outputs have stochastic noise due to hardware errors and a finite number of circuit shots (sampling). We finally test the presented methods using a hardware-efficient ansatz and four diverse datasets that include univariate and multivariate time series, with and without sampling noise. In addition, we compare the model with other existing quantum and classical approaches. Our results show how QRNNs can be trained with numerical and analytical gradients to make accurate predictions of future values by capturing non-trivial patterns of input series with different complexities.
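The operator-sum update described in the abstract can be sketched in a few lines of numpy: a non-selective mid-circuit measurement becomes a Kraus map on the density matrix, so the circuit can be emulated without sampling. The single-qubit register, the RY encoding/variational gates, and the fixed parameter below are illustrative assumptions, not the paper's hardware-efficient ansatz.

```python
import numpy as np

# Pauli-Z and computational-basis projectors
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

def ry(theta):
    """Single-qubit RY rotation (illustrative trainable/encoding gate)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def step(rho, theta, x):
    """One recurrent time step: encode the input x, apply a trainable gate,
    then emulate the mid-circuit measurement as an operator-sum (Kraus) map."""
    U = ry(theta) @ ry(x)                # encoding + variational rotation
    rho = U @ rho @ U.conj().T           # unitary evolution of the density matrix
    rho = P0 @ rho @ P0 + P1 @ rho @ P1  # non-selective measurement channel
    return rho

# Run a toy univariate series through the recurrent circuit
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # initial state |0><0|
outputs = []
for x in [0.1, 0.5, -0.3]:
    rho = step(rho, theta=0.7, x=x)
    outputs.append(np.trace(rho @ Z).real)  # <Z> as the network output
print(outputs)
```

Because the measurement channel is applied to the density matrix directly, the outputs are exact expectation values, with no shot noise; the state carries past information forward between time steps.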
Related papers
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet)
In particular, we design a convolutional filter based on the structure of finite difference with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
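The 4th-order Runge-Kutta integrator in the Physics Block is the classical RK4 scheme; a generic step can be sketched as below, where the right-hand side `f` stands in for the PDE's finite-difference residual (here it is just an illustrative ODE, not MultiPDENet's filter).

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(t, u);
    in a physics block, f would be a finite-difference discretization
    of the governing PDE."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on du/dt = -u, whose exact solution is exp(-t)
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
print(u)  # close to exp(-1)
```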
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - A Quantum Optical Recurrent Neural Network for Online Processing of Quantum Times Series [0.7087237546722617]
We show that a quantum optical recurrent neural network (QORNN) can enhance the transmission rate of quantum channels.
We also show that our model can counteract similar memory effects if they are unwanted.
We run a small-scale version of this last task on the photonic processor Borealis.
arXiv Detail & Related papers (2023-05-31T19:19:25Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
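The core of qDrift-style sampling is drawing Hamiltonian terms with probability proportional to the magnitude of their coefficients; the random product formula is then built from the drawn terms. The toy coefficient vector below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian H = sum_j h_j P_j, represented only by its coefficients h_j
coeffs = np.array([0.9, 0.05, 0.03, 0.02])
probs = np.abs(coeffs) / np.abs(coeffs).sum()  # qDrift sampling weights

# Build a random product formula: draw N term indices i.i.d. from `probs`;
# each drawn index j would contribute a factor exp(i * lambda * P_j * t / N)
N = 10_000
draws = rng.choice(len(coeffs), size=N, p=probs)
freq = np.bincount(draws, minlength=len(coeffs)) / N
print(freq)  # empirical frequencies approach `probs`
```

Importance sampling refines this by also weighting the individual simulation cost of each term, which is how the paper reduces the total cost at fixed accuracy.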
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - Quantum Quantile Mechanics: Solving Stochastic Differential Equations for Generating Time-Series [19.830330492689978]
We propose a quantum algorithm for sampling from a solution of stochastic differential equations (SDEs).
We represent the quantile function for an underlying probability distribution and extract samples as expectation values.
We test the method by simulating the Ornstein-Uhlenbeck process and sampling at times different from the initial point.
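For reference, the Ornstein-Uhlenbeck process used as the test case can be simulated classically with a simple Euler-Maruyama scheme; the parameter values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
#   dX_t = theta * (mu - X_t) dt + sigma dW_t
theta, mu, sigma = 1.0, 0.0, 0.3
dt, n_steps, n_paths = 0.01, 1000, 2000

x = np.full(n_paths, 2.0)  # all paths start at X_0 = 2
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
    x = x + theta * (mu - x) * dt + sigma * dw

print(x.mean())  # mean relaxes toward mu = 0
```

At t = 10 the process is essentially stationary, with mean mu and standard deviation sigma / sqrt(2 * theta).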
arXiv Detail & Related papers (2021-08-06T16:14:24Z) - Learning temporal data with variational quantum recurrent neural network [0.5658123802733283]
We propose a method for learning temporal data using a parametrized quantum circuit.
This work provides a way to exploit complex quantum dynamics for learning temporal data.
arXiv Detail & Related papers (2020-12-21T10:47:28Z) - Random Sampling Neural Network for Quantum Many-Body Problems [0.0]
We propose a general numerical method, Random Sampling Neural Networks (RSNN), which applies pattern recognition to randomly sampled matrix elements of an interacting many-body system via a self-supervised learning approach.
Several exactly solvable 1D models, including the transverse-field Ising model, the Fermi-Hubbard model, and the spin-$1/2$ $XXZ$ model, are used to test the applicability of RSNN.
arXiv Detail & Related papers (2020-11-10T15:52:44Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
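The defining ingredient of a PINN is a loss built from the residual of the governing equation. As a sketch, the residual of a candidate solution to u'' + u = 0 can be evaluated with central finite differences standing in for the automatic differentiation a real PINN would use; the equation and test functions are illustrative assumptions.

```python
import numpy as np

def pinn_style_residual(u_fn, ts, h=1e-4):
    """Mean squared residual of u'' + u = 0 for a candidate u(t).
    Central finite differences stand in for autodiff here; a real PINN
    differentiates the network itself with automatic differentiation."""
    u = u_fn(ts)
    d2u = (u_fn(ts + h) - 2 * u + u_fn(ts - h)) / h**2
    return np.mean((d2u + u) ** 2)  # physics-informed loss term

ts = np.linspace(0, 2 * np.pi, 100)
good = pinn_style_residual(np.sin, ts)  # sin solves u'' + u = 0
bad = pinn_style_residual(np.exp, ts)   # exp does not
print(good, bad)  # good is near zero, bad is large
```

Training drives the network toward functions like `np.sin` whose residual vanishes, without ever discretizing the domain on a mesh.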
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel-by-pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suited to neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
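A liquid time-constant cell can be sketched as a leaky (linear first-order) integrator whose effective time constant is modulated by a nonlinear synaptic gate; the shapes, sigmoid gating, and Euler discretization below are illustrative assumptions, not the paper's exact parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, inp, W, tau, dt=0.05):
    """One Euler step of a liquid time-constant-style cell: linear
    first-order dynamics dx/dt = -x/tau + s * (A - x), with the synaptic
    gate s computed nonlinearly from the state and input (A = 1 here)."""
    s = sigmoid(W @ np.concatenate([x, inp]))  # nonlinear synaptic gate
    dx = -x / tau + s * (1.0 - x)              # gated first-order dynamics
    return x + dt * dx

n_units, n_inputs = 4, 2
W = rng.normal(0.0, 0.5, size=(n_units, n_units + n_inputs))
tau = np.ones(n_units)
x = np.zeros(n_units)
for t in range(100):
    x = ltc_step(x, np.array([np.sin(0.1 * t), 1.0]), W, tau)
print(x)  # state stays bounded in [0, 1)
```

Because the gate both drives the state toward A = 1 and increases the leak, each unit's state is confined between 0 and A, which is the source of the bounded behavior the abstract mentions.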
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.