Rapid training of quantum recurrent neural network
- URL: http://arxiv.org/abs/2207.00378v1
- Date: Fri, 1 Jul 2022 12:29:33 GMT
- Title: Rapid training of quantum recurrent neural network
- Authors: Micha{\l} Siemaszko, Thomas McDermott, Adam Buraczewski, Bertrand Le
Saux, Magdalena Stobi\'nska
- Abstract summary: We propose a Quantum Recurrent Neural Network (QRNN) to address the slow and complex training of classical RNNs.
The design of the network is based on the continuous-variable quantum computing paradigm.
Our numerical simulations show that the QRNN converges to optimal weights in fewer epochs than the classical network.
- Score: 26.087244189340858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series prediction is a crucial task in many human activities,
e.g. weather forecasting or predicting stock prices. One solution to this
problem is to use Recurrent Neural Networks (RNNs). Although they can yield
accurate predictions, their learning process is slow and complex. Here we
propose a Quantum Recurrent Neural Network (QRNN) to address these obstacles.
The design of the network is based on the continuous-variable quantum computing
paradigm. We demonstrate that the network is capable of learning the time
dependence of several types of temporal data. Our numerical simulations show
that the QRNN converges to optimal weights in fewer epochs than the classical
network. Furthermore, with a small number of trainable parameters it can
achieve a lower loss than its classical counterpart.
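The abstract only names the continuous-variable (CV) paradigm, so the sketch below is a rough, hedged illustration of a CV recurrent cell written with PennyLane's `default.gaussian` simulator. It is not the authors' architecture: the particular gates, the use of mean photon numbers as read-out, and feeding the hidden value back as a displacement amplitude are all illustrative assumptions.

```python
# Minimal sketch of a continuous-variable recurrent cell (illustrative only,
# NOT the architecture from the paper). Requires PennyLane.
import numpy as np
import pennylane as qml

dev = qml.device("default.gaussian", wires=2)

@qml.qnode(dev)
def cv_cell(x_t, h_prev, params):
    # Encode the current input and the previous hidden value as displacements.
    qml.Displacement(x_t, 0.0, wires=0)
    qml.Displacement(h_prev, 0.0, wires=1)
    # A small trainable Gaussian layer: squeezing, a beamsplitter, rotations.
    qml.Squeezing(params[0], 0.0, wires=0)
    qml.Squeezing(params[1], 0.0, wires=1)
    qml.Beamsplitter(params[2], params[3], wires=[0, 1])
    qml.Rotation(params[4], wires=0)
    qml.Rotation(params[5], wires=1)
    # Mode 0 provides the output, mode 1 carries the recurrent state
    # (read out here as mean photon numbers, an illustrative choice).
    return qml.expval(qml.NumberOperator(0)), qml.expval(qml.NumberOperator(1))

def run_sequence(xs, params):
    """Unroll the cell over a time series and collect one output per step."""
    h, outputs = 0.0, []
    for x_t in xs:
        y_t, h = cv_cell(x_t, float(h), params)
        outputs.append(float(y_t))
    return outputs

params = np.random.default_rng(0).uniform(0.0, 0.1, size=6)
print(run_sequence(np.sin(np.linspace(0.0, np.pi, 8)), params))
```

In the paper the cell, read-out, and training loop differ; the point here is only the pattern of encoding data into quantum modes, applying a parameterized Gaussian circuit, and feeding a measured expectation back as the recurrent state.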
Related papers
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
We develop the Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet).
LQNet and CTRQNet achieve accuracy gains as high as 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- Algebraic Representations for Faster Predictions in Convolutional Neural Networks [0.0]
Convolutional neural networks (CNNs) are a popular choice of model for tasks in computer vision.
Skip connections may be added to create an easier gradient optimization problem.
We show that arbitrarily complex, trained, linear CNNs with skip connections can be simplified into a single-layer model (a small numpy sketch of this collapsing idea follows the list below).
arXiv Detail & Related papers (2024-08-14T21:10:05Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) to find quantum advantageous applications in the near term.
Our QRNN is built by stacking quantum recurrent blocks (QRBs) in a staggered way, which greatly reduces the algorithm's requirements on the coherence time of quantum devices.
Numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than the classical RNN and state-of-the-art QNN models for sequential learning.
arXiv Detail & Related papers (2023-02-07T04:04:39Z)
- Supervised learning of random quantum circuits via scalable neural networks [0.0]
Deep convolutional neural networks (CNNs) are trained to predict single-qubit and two-qubit expectation values.
The CNNs often outperform the quantum devices, depending on the circuit depth, the network depth, and the training set size.
arXiv Detail & Related papers (2022-06-21T13:05:52Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Tensor train decompositions on recurrent networks [60.334946204107446]
Matrix product state (MPS) tensor trains have more attractive features than matrix product operators (MPOs) in terms of storage reduction and computing time at inference.
Through a theoretical analysis and practical experiments on NLP tasks, we show that MPS tensor trains should be at the forefront of LSTM network compression (a toy tensor-train sketch also follows the list below).
arXiv Detail & Related papers (2020-06-09T18:25:39Z)
- Zero-shot and few-shot time series forecasting with ordinal regression recurrent neural networks [17.844338213026976]
Recurrent neural networks (RNNs) are state-of-the-art in several sequential learning tasks, but they often require considerable amounts of data to generalise well.
We propose a novel RNN-based model that directly addresses this problem by learning a shared feature embedding over the space of many quantised time series.
We show how this enables our RNN framework to accurately and reliably forecast unseen time series, even when there is little to no training data available.
arXiv Detail & Related papers (2020-03-26T21:33:10Z)
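For the "Algebraic Representations for Faster Predictions in Convolutional Neural Networks" entry above, the following toy numpy sketch (my own example, not that paper's code) shows the algebra behind collapsing purely linear convolutional layers, plus additive skip connections, into a single kernel: convolution is associative and linear in the kernel.

```python
# Toy illustration: stacks of LINEAR convolutions (no nonlinearity) and
# additive skip connections fold into one equivalent convolution kernel.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=32)          # 1-D input signal
k1 = rng.normal(size=3)          # first linear "layer"
k2 = rng.normal(size=5)          # second linear "layer"

# Two stacked linear conv layers ...
two_layers = np.convolve(np.convolve(x, k1, mode="full"), k2, mode="full")
# ... equal one layer whose kernel is the convolution of the two kernels.
one_layer = np.convolve(x, np.convolve(k1, k2, mode="full"), mode="full")
assert np.allclose(two_layers, one_layer)

# A skip connection y = conv(x, k1) + x folds into the kernel as well,
# because convolution is linear in the kernel (delta acts as the identity).
delta = np.array([0.0, 1.0, 0.0])            # identity kernel for 'same' mode
with_skip = np.convolve(x, k1, mode="same") + x
folded = np.convolve(x, k1 + delta, mode="same")
assert np.allclose(with_skip, folded)
print("linear conv stacks and skip connections collapse into single kernels")
```

The simplification only holds while the layers stay linear; any nonlinearity between them breaks the collapse.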
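For the tensor-train entry above, here is a generic TT-SVD sketch (again my own illustration, not the paper's MPS-vs-MPO study): a random 4-way tensor, standing in for a reshaped recurrent weight matrix, is factorised into tensor-train cores by sequential truncated SVDs, trading reconstruction error against a much smaller parameter count. A random tensor compresses poorly; the point is only the mechanics.

```python
# Generic tensor-train (TT-SVD) sketch; an illustrative stand-in for
# compressing a reshaped recurrent weight tensor.
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a tensor into tensor-train cores via sequential truncated SVDs."""
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, s.size)
        cores.append(u[:, :new_rank].reshape(rank, d, new_rank))
        mat = np.diag(s[:new_rank]) @ vt[:new_rank]
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a full tensor (to measure the error)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])

# A random 8x8x8x8 tensor, e.g. a 64x64 weight matrix reshaped into 4 modes.
W = np.random.default_rng(1).normal(size=(8, 8, 8, 8))
cores = tt_svd(W, max_rank=4)
approx = tt_reconstruct(cores)
rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
print(f"params: {W.size} -> {sum(c.size for c in cores)}, relative error {rel_err:.3f}")
```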
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.