Toward Large-Scale Distributed Quantum Long Short-Term Memory with Modular Quantum Computers
- URL: http://arxiv.org/abs/2503.14088v1
- Date: Tue, 18 Mar 2025 10:07:34 GMT
- Title: Toward Large-Scale Distributed Quantum Long Short-Term Memory with Modular Quantum Computers
- Authors: Kuan-Cheng Chen, Samuel Yen-Chi Chen, Chen-Yu Liu, Kin K. Leung
- Abstract summary: We introduce a Distributed Quantum Long Short-Term Memory (QLSTM) framework to address scalability challenges on Noisy Intermediate-Scale Quantum (NISQ) devices. The QLSTM captures long-range temporal dependencies, while a distributed architecture partitions the underlying Variational Quantum Circuits into smaller, manageable subcircuits. We demonstrate that the distributed QLSTM achieves stable convergence and improved training dynamics compared to classical approaches.
- Score: 5.673361333697935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce a Distributed Quantum Long Short-Term Memory (QLSTM) framework that leverages modular quantum computing to address scalability challenges on Noisy Intermediate-Scale Quantum (NISQ) devices. By embedding variational quantum circuits into LSTM cells, the QLSTM captures long-range temporal dependencies, while a distributed architecture partitions the underlying Variational Quantum Circuits (VQCs) into smaller, manageable subcircuits that can be executed on a network of quantum processing units. We assess the proposed framework using nontrivial benchmark problems such as damped harmonic oscillators and Nonlinear Autoregressive Moving Average sequences. Our results demonstrate that the distributed QLSTM achieves stable convergence and improved training dynamics compared to classical approaches. This work underscores the potential of modular, distributed quantum computing architectures for large-scale sequence modelling, providing a foundation for the future integration of hybrid quantum-classical solutions into advanced Quantum High-performance computing (HPC) ecosystems.
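The abstract describes, but does not show, how each gate of an LSTM cell can be backed by variational quantum circuits that are partitioned into smaller subcircuits suitable for separate quantum processing units. The sketch below is only an illustrative reading of that idea, assuming PennyLane with the PyTorch interface; the qubit counts, the number of subcircuits per gate, the feature-partitioning scheme, and names such as DistributedQLSTMCell and make_subcircuit are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of a distributed QLSTM cell (illustrative, not the paper's code).
# Assumptions: PennyLane + PyTorch; each LSTM gate is computed by several small
# VQC "subcircuits" whose outputs are concatenated, standing in for circuits that
# would run on separate QPUs in a modular architecture.
import torch
import torch.nn as nn
import pennylane as qml

N_SUB = 2      # number of subcircuits per gate (hypothetical "QPUs")
N_QUBITS = 4   # qubits per subcircuit
N_LAYERS = 2   # variational layers per subcircuit


def make_subcircuit():
    """Build one small VQC returning Pauli-Z expectations on every wire."""
    dev = qml.device("default.qubit", wires=N_QUBITS)

    @qml.qnode(dev, interface="torch")
    def circuit(inputs, weights):
        qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
        qml.BasicEntanglerLayers(weights, wires=range(N_QUBITS))
        return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

    weight_shapes = {"weights": (N_LAYERS, N_QUBITS)}
    return qml.qnn.TorchLayer(circuit, weight_shapes)


class DistributedQLSTMCell(nn.Module):
    """LSTM cell whose forget/input/candidate/output gates use partitioned VQCs."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        feat = N_SUB * N_QUBITS
        # Classical projection of [x_t, h_{t-1}] onto the total qubit register size.
        self.proj = nn.Linear(input_size + hidden_size, feat)
        # One bank of subcircuits per gate: forget (f), input (i), candidate (g), output (o).
        self.gates = nn.ModuleDict({
            g: nn.ModuleList([make_subcircuit() for _ in range(N_SUB)])
            for g in ("f", "i", "g", "o")
        })
        self.out = nn.ModuleDict({g: nn.Linear(feat, hidden_size)
                                  for g in ("f", "i", "g", "o")})

    def _gate(self, name, v):
        # Partition the feature vector so each chunk fits one small subcircuit.
        chunks = torch.split(v, N_QUBITS, dim=-1)
        q = torch.cat([sub(c) for sub, c in zip(self.gates[name], chunks)], dim=-1)
        return self.out[name](q)

    def forward(self, x, state):
        h, c = state
        v = torch.tanh(self.proj(torch.cat([x, h], dim=-1)))  # bounded angles
        f = torch.sigmoid(self._gate("f", v))
        i = torch.sigmoid(self._gate("i", v))
        g = torch.tanh(self._gate("g", v))
        o = torch.sigmoid(self._gate("o", v))
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)
```

A full model would unroll this cell over time and fit sequences such as the damped harmonic oscillator or NARMA benchmarks mentioned in the abstract; in a genuinely modular deployment, each subcircuit in a gate's bank would be dispatched to a different quantum processing unit rather than evaluated on a local simulator.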
Related papers
- Quantum Adaptive Self-Attention for Quantum Transformer Models [0.0]
We propose Quantum Adaptive Self-Attention (QASA), a novel hybrid architecture that enhances classical Transformer models with a quantum attention mechanism.
QASA replaces dot-product attention with a parameterized quantum circuit (PQC) that adaptively captures inter-token relationships in the quantum Hilbert space.
Experiments on synthetic time-series tasks demonstrate that QASA achieves faster convergence and superior generalization compared to both standard Transformers and reduced classical variants.
arXiv Detail & Related papers (2025-04-05T02:52:37Z) - HQCC: A Hybrid Quantum-Classical Classifier with Adaptive Structure [7.836610894905161]
We propose a Hybrid Quantum-Classical Classifier (HQCC) to advance Quantum Machine Learning (QML).
HQCC adaptively optimizes Parameterized Quantum Circuits (PQCs) through a Long Short-Term Memory (LSTM)-driven dynamic circuit generator.
We run simulations on the MNIST and Fashion MNIST datasets, achieving up to 97.12% accuracy.
arXiv Detail & Related papers (2025-04-02T22:49:00Z) - Training Hybrid Deep Quantum Neural Network for Reinforced Learning Efficiently [2.7812018782449073]
We present a scalable quantum machine learning architecture that overcomes challenges with efficient backpropagation. Our method highlights that hybrid deep quantum neural networks (hDQNNs) could exhibit potentially improved generalizability compared to purely classical models.
arXiv Detail & Related papers (2025-03-12T07:12:02Z) - Quantum Kernel-Based Long Short-term Memory for Climate Time-Series Forecasting [0.24739484546803336]
We present the Quantum Kernel-Based Long Short-Term Memory (QK-LSTM) network, which integrates quantum kernel methods into classical LSTM architectures. QK-LSTM captures intricate nonlinear dependencies and temporal dynamics with fewer trainable parameters.
arXiv Detail & Related papers (2024-12-12T01:16:52Z) - Programming Variational Quantum Circuits with Quantum-Train Agent [3.360429911727189]
The Quantum-Train Quantum Fast Weight Programmer (QT-QFWP) framework is proposed, which facilitates the efficient and scalable programming of variational quantum circuits (VQCs). This approach offers a significant advantage over conventional hybrid quantum-classical models by optimizing both quantum and classical parameter management. QT-QFWP outperforms related models in both efficiency and predictive accuracy, providing a pathway toward more practical and cost-effective quantum machine learning applications.
arXiv Detail & Related papers (2024-12-02T06:26:09Z) - Quantum Kernel-Based Long Short-term Memory [0.30723404270319693]
We introduce the Quantum Kernel-Based Long Short-Term Memory (QK-LSTM) network to capture complex, non-linear patterns in sequential data.
This quantum-enhanced architecture demonstrates efficient convergence, robust loss minimization, and model compactness.
Benchmark comparisons reveal that QK-LSTM achieves performance on par with classical LSTM models, yet with fewer parameters.
arXiv Detail & Related papers (2024-11-20T11:39:30Z) - Parallel Quantum Computing Simulations via Quantum Accelerator Platform Virtualization [44.99833362998488]
We present a model for parallelizing simulation of quantum circuit executions.
The model can take advantage of its backend-agnostic features, enabling parallel quantum circuit execution over any target backend.
arXiv Detail & Related papers (2024-06-05T17:16:07Z) - A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z) - A self-consistent field approach for the variational quantum eigensolver: orbital optimization goes adaptive [52.77024349608834]
We present a self-consistent field (SCF) approach within the Adaptive Derivative-Assembled Problem-Tailored Ansatz Variational Quantum Eigensolver (ADAPT-VQE).
This framework is used for efficient quantum simulations of chemical systems on near-term quantum computers.
arXiv Detail & Related papers (2022-12-21T23:15:17Z) - Quantum Federated Learning with Entanglement Controlled Circuits and Superposition Coding [44.89303833148191]
We develop a depth-controllable architecture of entangled slimmable quantum neural networks (eSQNNs).
We propose an entangled slimmable quantum federated learning (eSQFL) scheme that communicates the superposition-coded parameters of eSQNNs.
In an image classification task, extensive simulations corroborate the effectiveness of eSQFL.
arXiv Detail & Related papers (2022-12-04T03:18:03Z) - Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage [43.3054117987806]
We introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for quantum circuits.
We show this method significantly improves the trainability and performance of PQCs on a variety of problems.
By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing.
arXiv Detail & Related papers (2022-08-29T15:24:03Z) - Tensor Network Quantum Virtual Machine for Simulating Quantum Circuits at Exascale [57.84751206630535]
We present a modernized version of the Tensor Network Quantum Virtual Machine (TNQVM), which serves as a quantum circuit simulation backend in the eXtreme-scale ACCelerator (XACC) framework.
The new version is based on the general-purpose, scalable tensor network processing library ExaTN and provides multiple quantum circuit simulators.
By combining the portable XACC quantum programming framework and the scalable ExaTN backend, we introduce an end-to-end virtual development environment that can scale from laptops to future exascale platforms.
arXiv Detail & Related papers (2021-04-21T13:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.