Hybrid Quantum Recurrent Neural Network For Remaining Useful Life Prediction
- URL: http://arxiv.org/abs/2504.20823v1
- Date: Tue, 29 Apr 2025 14:41:41 GMT
- Title: Hybrid Quantum Recurrent Neural Network For Remaining Useful Life Prediction
- Authors: Olga Tsurkan, Aleksandra Konstantinova, Aleksandr Sedykh, Dmitrii Zhiganov, Arsenii Senokosov, Daniil Tarpanov, Matvei Anoshin, Leonid Fedichkin
- Abstract summary: We introduce a Hybrid Quantum Recurrent Neural Network framework, combining Quantum Long Short-Term Memory layers with classical dense layers for Remaining Useful Life forecasting. Experimental results demonstrate that, despite having fewer trainable parameters, the Hybrid Quantum Recurrent Neural Network achieves up to a 5% improvement over a Recurrent Neural Network.
- Score: 67.410870290301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive maintenance in aerospace heavily relies on accurate estimation of the remaining useful life of jet engines. In this paper, we introduce a Hybrid Quantum Recurrent Neural Network framework, combining Quantum Long Short-Term Memory layers with classical dense layers for Remaining Useful Life forecasting on NASA's Commercial Modular Aero-Propulsion System Simulation dataset. Each Quantum Long Short-Term Memory gate replaces conventional linear transformations with Quantum Depth-Infused circuits, allowing the network to learn high-frequency components more effectively. Experimental results demonstrate that, despite having fewer trainable parameters, the Hybrid Quantum Recurrent Neural Network achieves up to a 5% improvement over a Recurrent Neural Network based on stacked Long Short-Term Memory layers in terms of mean root mean squared error and mean absolute error. Moreover, a thorough comparison of our method with established techniques, including Random Forest, Convolutional Neural Network, and Multilayer Perceptron, demonstrates that our approach, which achieves a Root Mean Squared Error of 15.46, surpasses these baselines by approximately 13.68%, 16.21%, and 7.87%, respectively. Nevertheless, it remains outperformed by certain advanced joint architectures. Our findings highlight the potential of hybrid quantum-classical approaches for robust time-series forecasting under limited data conditions, offering new avenues for enhancing reliability in predictive maintenance tasks.
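The abstract specifies the architecture only at a high level: each QLSTM gate replaces its usual linear transformation with a quantum circuit. The sketch below is one plausible reading of that design, assuming PennyLane with a PyTorch interface. The paper's Quantum Depth-Infused circuit is not reproduced here, so a generic entangling ansatz stands in, and the qubit count, circuit depth, and the classical down/up projections around each circuit are illustrative assumptions.

```python
# Minimal QLSTM cell sketch (PennyLane + PyTorch). NOT the paper's exact
# architecture: the Quantum Depth-Infused circuits are approximated by a
# generic StronglyEntanglingLayers ansatz; sizes are illustrative.
import pennylane as qml
import torch
import torch.nn as nn

torch.set_default_dtype(torch.float64)  # keep classical/quantum parts in one precision

N_QUBITS, N_VQC_LAYERS = 4, 2
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def gate_circuit(inputs, weights):
    # Angle-encode the down-projected gate pre-activation, then apply
    # trainable entangling layers; read out one expectation per qubit.
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

weight_shapes = {"weights": (N_VQC_LAYERS, N_QUBITS, 3)}

class QLSTMCell(nn.Module):
    """LSTM cell whose four gate transformations run through small quantum circuits."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gates = ("f", "i", "g", "o")  # forget, input, cell, output
        self.in_proj = nn.ModuleDict(
            {g: nn.Linear(input_size + hidden_size, N_QUBITS) for g in self.gates})
        self.vqc = nn.ModuleDict(
            {g: qml.qnn.TorchLayer(gate_circuit, weight_shapes) for g in self.gates})
        self.out_proj = nn.ModuleDict(
            {g: nn.Linear(N_QUBITS, hidden_size) for g in self.gates})

    def forward(self, x_t, state):
        h, c = state
        z = torch.cat([x_t, h], dim=-1)
        # Each gate: project to qubit count, run the circuit, project back.
        q = {g: self.out_proj[g](self.vqc[g](self.in_proj[g](z))) for g in self.gates}
        f, i, o = (torch.sigmoid(q[g]) for g in ("f", "i", "o"))
        c = f * c + i * torch.tanh(q["g"])  # cell state update
        h = o * torch.tanh(c)               # hidden state
        return h, c

# Usage on a hypothetical 14-feature sensor window (CMAPSS subsets often
# keep 14 of the 21 sensors; purely illustrative here):
cell = QLSTMCell(input_size=14, hidden_size=8)
h = c = torch.zeros(1, 8)
h, c = cell(torch.rand(1, 14), (h, c))
```

The classical projections keep the gate interfaces at the usual LSTM dimensions while the four small circuits replace the large gate weight matrices, which is where the reduced trainable-parameter count claimed in the abstract would come from.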
Related papers
- Toward Practical Quantum Machine Learning: A Novel Hybrid Quantum LSTM for Fraud Detection [0.1398098625978622]
We present a novel hybrid quantum-classical neural network architecture for fraud detection.
By leveraging quantum phenomena such as superposition and entanglement, our model enhances the feature representation of sequential transaction data.
Results demonstrate competitive improvements in accuracy, precision, recall, and F1 score relative to a conventional LSTM baseline.
arXiv Detail & Related papers (2025-04-30T19:09:12Z)
- Hybrid Quantum Neural Networks with Amplitude Encoding: Advancing Recovery Rate Predictions [6.699192644249841]
Recovery rate prediction plays a pivotal role in bond investment strategies. Forecasting faces challenges like high-dimensional features, small sample sizes, and overfitting. We propose a hybrid Quantum Machine Learning model incorporating Parameterized Quantum Circuits (PQCs) within a neural network framework.
arXiv Detail & Related papers (2025-01-27T07:27:23Z)
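For reference, the core ingredient named in the entry above, amplitude encoding feeding a PQC, looks roughly like this in PennyLane. The qubit count, ansatz, and readout are illustrative assumptions, not the paper's configuration.

```python
# Minimal amplitude-encoding + PQC sketch. 2**n features are packed into
# the amplitudes of an n-qubit state, then a trainable ansatz is applied.
import pennylane as qml
import torch

torch.set_default_dtype(torch.float64)

N_QUBITS = 3  # amplitude encoding packs 2**3 = 8 features into 3 qubits
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def pqc(inputs, weights):
    # normalize=True rescales the feature vector into a valid quantum state;
    # pad_with fills inputs shorter than 2**n.
    qml.AmplitudeEmbedding(inputs, wires=range(N_QUBITS),
                           normalize=True, pad_with=0.0)
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

layer = qml.qnn.TorchLayer(pqc, {"weights": (2, N_QUBITS, 3)})
out = layer(torch.rand(5, 8))  # batch of 5 samples, 8 features -> shape (5, 3)
```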
- Quantum Kernel-Based Long Short-Term Memory for Climate Time-Series Forecasting [0.24739484546803336]
We present the Quantum Kernel-Based Long Short-Term Memory (QK-LSTM) network, which integrates quantum kernel methods into classical LSTM architectures.
QK-LSTM captures intricate nonlinear dependencies and temporal dynamics with fewer trainable parameters.
arXiv Detail & Related papers (2024-12-12T01:16:52Z)
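The quantum-kernel ingredient of QK-LSTM can be sketched in isolation as a fidelity (state-overlap) kernel between two angle-encoded feature vectors. How the paper composes this kernel with the LSTM gates is not shown here; the encoding and qubit count are assumptions.

```python
# Fidelity kernel sketch: k(x1, x2) = |<phi(x2)|phi(x1)>|^2, measured as
# the probability of returning to |0...0> after encode(x1); adjoint-encode(x2).
import numpy as np
import pennylane as qml

N_QUBITS = 4
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def overlap_circuit(x1, x2):
    qml.AngleEmbedding(x1, wires=range(N_QUBITS))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(N_QUBITS))
    return qml.probs(wires=range(N_QUBITS))

def quantum_kernel(x1, x2):
    # First entry of the probability vector is P(|0000>), the squared overlap.
    return overlap_circuit(x1, x2)[0]

x = np.array([0.1, 0.2, 0.3, 0.4])
print(quantum_kernel(x, x))  # ~1.0: identical inputs give maximal overlap
```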
- QIANets: Quantum-Integrated Adaptive Networks for Reduced Latency and Improved Inference Times in CNN Models [2.6663666678221376]
Convolutional neural networks (CNNs) have made significant advances in computer vision tasks, yet their high inference times and latency limit real-world applicability.
We introduce QIANets, a novel approach that redesigns the traditional GoogLeNet, DenseNet, and ResNet-18 architectures to process more parameters and computations while maintaining low inference times.
Despite experimental limitations, the method was tested and evaluated, demonstrating reductions in inference times while effectively preserving accuracy.
arXiv Detail & Related papers (2024-10-14T09:24:48Z)
- Quantum-Train Long Short-Term Memory: Application on Flood Prediction Problem [0.8192907805418583]
This study applies the Quantum-Train (QT) technique to a forecasting Long Short-Term Memory (LSTM) model trained by Quantum Machine Learning (QML).
The QT technique, originally successful in the A Matter of Taste challenge at QHack 2024, leverages QML to reduce the number of trainable parameters to a polylogarithmic function of the number of parameters in a classical neural network (NN).
Our approach directly processes classical data without the need for quantum embedding and operates independently of quantum computing resources post-training.
arXiv Detail & Related papers (2024-07-11T15:56:00Z)
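A minimal sketch of the parameter-compression idea behind Quantum-Train: an n-qubit circuit with n ≈ log2(M) generates all M weights of a classical layer from its basis-state measurement probabilities, so only the circuit angles are trained. The probability-to-weight mapping and the ansatz below are illustrative assumptions, not the paper's exact scheme.

```python
# Quantum-Train-style weight generation sketch (PennyLane + PyTorch).
import math
import pennylane as qml
import torch
import torch.nn.functional as F

torch.set_default_dtype(torch.float64)

M = 6 * 4 + 4                       # weights + biases of a tiny 6->4 linear layer
N_QUBITS = math.ceil(math.log2(M))  # 5 qubits give 2**5 = 32 >= 28 probabilities
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def weight_generator(angles):
    qml.StronglyEntanglingLayers(angles, wires=range(N_QUBITS))
    return qml.probs(wires=range(N_QUBITS))  # 2**n basis-state probabilities

# The only trainables: 2 layers x 5 qubits x 3 angles = 30 numbers, standing
# in for the 28 classical parameters (the savings grow with M).
angles = torch.randn(2, N_QUBITS, 3, requires_grad=True)

def forward(x):
    probs = weight_generator(angles)[:M]
    w = (probs - probs.mean()) * M            # illustrative probability -> weight map
    weight, bias = w[:24].reshape(4, 6), w[24:]
    return F.linear(x, weight, bias)

y = forward(torch.rand(3, 6))  # gradients flow back to the 30 circuit angles
```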
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Scaling Limits of Quantum Repeater Networks [62.75241407271626]
Quantum networks (QNs) are a promising platform for secure communications, enhanced sensing, and efficient distributed quantum computing.
Due to the fragile nature of quantum states, these networks face significant challenges in terms of scalability.
In this paper, the scaling limits of quantum repeater networks (QRNs) are analyzed.
arXiv Detail & Related papers (2023-05-15T14:57:01Z)
- Adaptive, Continuous Entanglement Generation for Quantum Networks [59.600944425468676]
Quantum networks rely on entanglement between qubits at distant nodes to transmit information.
We present an adaptive scheme that uses information from previous requests to better guide the choice of randomly generated quantum links.
We also explore quantum memory allocation scenarios, where differences in latency performance imply the need for optimal resource allocation across the network.
arXiv Detail & Related papers (2022-12-17T05:40:09Z)
- Layer Ensembles [95.42181254494287]
We introduce a method for uncertainty estimation that considers a set of independent categorical distributions for each layer of the network.
We show that the method can be further improved by ranking samples, resulting in models that require less memory and time to run.
arXiv Detail & Related papers (2022-10-10T17:52:47Z)
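Read literally, the Layer Ensembles entry above suggests each layer holding several candidate weight sets with a categorical choice among them on each forward pass; a toy PyTorch rendering under that assumption follows. All names, sizes, and the uniform sampling rule are hypothetical.

```python
# Toy layer-ensemble sketch: L layers with k members each give k**L
# possible networks; sampling many of them yields an uncertainty estimate.
import torch
import torch.nn as nn

class LayerEnsemble(nn.Module):
    """A layer holding k independent weight sets; forward samples one."""
    def __init__(self, in_features, out_features, k=4):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Linear(in_features, out_features) for _ in range(k))

    def forward(self, x):
        idx = torch.randint(len(self.members), (1,)).item()  # uniform categorical draw
        return self.members[idx](x)

net = nn.Sequential(LayerEnsemble(8, 16), nn.ReLU(), LayerEnsemble(16, 1))
x = torch.rand(32, 8)
preds = torch.stack([net(x) for _ in range(20)])    # 20 sampled ensemble members
mean, spread = preds.mean(dim=0), preds.std(dim=0)  # prediction + uncertainty
```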
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
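A toy rendering of the bit-drop idea from the entry above: quantize weights to b bits, then randomly zero individual low-order bits rather than dropping whole neurons. The quantizer and drop rule are illustrative, not the paper's exact DropBits formulation.

```python
# DropBits-style regularizer sketch: per-weight, per-bit random masking
# of a signed b-bit quantization of the weight tensor.
import torch

def dropbits(w: torch.Tensor, bits: int = 8, p_drop: float = 0.2) -> torch.Tensor:
    scale = w.abs().max() / (2 ** (bits - 1) - 1)   # symmetric quantization step
    q = torch.round(w / scale).to(torch.int32)      # signed b-bit integer levels
    mag, sign = q.abs(), q.sign()
    # Independently drop each low-order bit of the magnitude with prob p_drop.
    for b in range(bits - 1):
        keep = torch.rand_like(w) > p_drop
        bit = (mag >> b) & 1
        mag = mag - torch.where(keep, torch.zeros_like(bit), bit << b)
    return sign * mag * scale                       # back to float weights

w = torch.randn(4, 4)
print(dropbits(w))  # quantized weights with some bits randomly dropped
```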