Improving Quantum Recurrent Neural Networks with Amplitude Encoding
- URL: http://arxiv.org/abs/2508.16784v1
- Date: Fri, 22 Aug 2025 20:31:40 GMT
- Authors: Jack Morgan, Hamed Mohammadbagherpoor, Eric Ghysels
- Abstract summary: The Quantum Recurrent Neural Network encodes temporal data into quantum states that are periodically input into a quantum circuit. We evaluate and improve amplitude-based QRNNs using EnQode, a recently introduced method for approximate amplitude encoding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum machine learning holds promise for advancing time series forecasting. The Quantum Recurrent Neural Network (QRNN), inspired by classical RNNs, encodes temporal data into quantum states that are periodically input into a quantum circuit. While prior QRNN work has predominantly used angle encoding, alternative encoding strategies like amplitude encoding remain underexplored due to their high computational complexity. In this paper, we evaluate and improve amplitude-based QRNNs using EnQode, a recently introduced method for approximate amplitude encoding. We propose a simple pre-processing technique that augments amplitude encoded inputs with their pre-normalized magnitudes, leading to improved generalization on two real world data sets. Additionally, we introduce a novel circuit architecture for the QRNN that is mathematically equivalent to the original model but achieves a substantial reduction in circuit depth. Together, these contributions demonstrate practical improvements to QRNN design in both model performance and quantum resource efficiency.
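The pre-processing technique described in the abstract (augmenting amplitude-encoded inputs with their pre-normalized magnitudes) can be sketched as follows. This is an illustrative sketch only; the function name and interface are assumptions, not the authors' implementation:

```python
import numpy as np

def amplitude_encode_with_magnitude(x):
    """Normalize a window of time-series values to a unit vector
    (a valid amplitude-encoding input) and return the pre-normalized
    magnitude as an extra feature, so the model retains the
    absolute-scale information that normalization would otherwise
    discard. Hypothetical sketch of the paper's pre-processing idea."""
    x = np.asarray(x, dtype=float)
    mag = np.linalg.norm(x)      # magnitude before normalization, ||x||
    if mag == 0.0:
        amps = np.zeros_like(x)  # degenerate all-zero window
    else:
        amps = x / mag           # unit-norm vector of amplitudes
    return amps, mag

amps, mag = amplitude_encode_with_magnitude([3.0, 4.0])
# amps -> [0.6, 0.8], mag -> 5.0
```

The returned `mag` would be supplied to the model alongside the amplitude vector, e.g. encoded on an auxiliary qubit or appended as a classical feature.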
Related papers
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
The Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet) are developed. LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z) - Introducing Reduced-Width QNNs, an AI-inspired Ansatz Design Pattern [3.757262277494307]
Variational Quantum Algorithms are one of the most promising candidates to yield the first industrially relevant quantum advantage.
They are often referred to as Quantum Neural Networks (QNNs) when used in settings analogous to classical Artificial Neural Networks (ANNs).
We propose a reduced-width circuit ansatz design, which is motivated by recent results gained in the analysis of dropout regularization in QNNs.
arXiv Detail & Related papers (2023-06-08T08:58:43Z) - Graph Neural Network Autoencoders for Efficient Quantum Circuit Optimisation [69.43216268165402]
We present for the first time how to use graph neural network (GNN) autoencoders for the optimisation of quantum circuits.
We construct directed acyclic graphs from the quantum circuits, encode the graphs and use the encodings to represent RL states.
Our method is a realistic first step towards very large-scale RL quantum circuit optimisation.
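The graph construction this entry describes (each gate a node, edges following qubit dependencies) can be sketched in a few lines. The representation below is an illustrative assumption, not the paper's actual encoding:

```python
def circuit_to_dag_edges(gates):
    """Build DAG edges from a gate list: each gate is a node (its index),
    and an edge connects consecutive gates acting on the same qubit.
    Hypothetical sketch of turning a circuit into a directed acyclic graph."""
    last_on_qubit = {}  # qubit -> index of the last gate that touched it
    edges = []
    for i, (name, qubits) in enumerate(gates):
        for q in qubits:
            if q in last_on_qubit:
                edges.append((last_on_qubit[q], i))
            last_on_qubit[q] = i
    return edges

dag_edges = circuit_to_dag_edges([("H", [0]), ("CX", [0, 1]), ("RZ", [1])])
# dag_edges -> [(0, 1), (1, 2)]
```

In the paper's pipeline, such a graph would then be passed through a GNN autoencoder whose latent code serves as the RL state.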
arXiv Detail & Related papers (2023-03-06T16:51:30Z) - Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) to find quantum advantageous applications in the near term.
Our QRNN is built by stacking quantum recurrent blocks (QRBs) in a staggered way, which greatly reduces the algorithm's requirements on the coherence time of quantum devices.
Numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than the classical RNN and state-of-the-art QNN models for sequential learning.
arXiv Detail & Related papers (2023-02-07T04:04:39Z) - Reservoir Computing via Quantum Recurrent Neural Networks [0.5999777817331317]
Existing VQC or QNN-based methods require significant computational resources to perform gradient-based optimization of quantum circuit parameters.
In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC).
Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models for several function approximation and time series tasks.
arXiv Detail & Related papers (2022-11-04T17:30:46Z) - Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - Supervised learning of random quantum circuits via scalable neural networks [0.0]
Deep convolutional neural networks (CNNs) are trained to predict single-qubit and two-qubit expectation values.
The CNNs often outperform the quantum devices, depending on the circuit depth, the network depth, and the training set size.
arXiv Detail & Related papers (2022-06-21T13:05:52Z) - Quantum convolutional neural network for classical data classification [0.8057006406834467]
We benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification.
We propose a quantum neural network model inspired by CNN that only uses two-qubit interactions throughout the entire algorithm.
arXiv Detail & Related papers (2021-08-02T06:48:34Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Branching Quantum Convolutional Neural Networks [0.0]
Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets.
We present a generalization of QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility.
arXiv Detail & Related papers (2020-12-28T19:00:03Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs due to vanishing gradients, whose rate decays exponentially with the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech signal is first up-streamed to a quantum computing server to extract the Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.