LSTM-QGAN: Scalable NISQ Generative Adversarial Network
- URL: http://arxiv.org/abs/2409.02212v1
- Date: Tue, 3 Sep 2024 18:27:15 GMT
- Title: LSTM-QGAN: Scalable NISQ Generative Adversarial Network
- Authors: Cheng Chu, Aishwarya Hastak, Fan Chen
- Abstract summary: Current quantum generative adversarial networks (QGANs) still struggle with practical-sized data.
We propose LSTM-QGAN, a QGAN architecture that eliminates preprocessing and integrates quantum long short-term memory (QLSTM) to ensure scalable performance.
Our experiments show that LSTM-QGAN significantly enhances both performance and scalability over state-of-the-art QGAN models.
- Score: 3.596166341956192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current quantum generative adversarial networks (QGANs) still struggle with practical-sized data. First, many QGANs use principal component analysis (PCA) for dimension reduction, which, as our studies reveal, can diminish the QGAN's effectiveness. Second, methods that segment inputs into smaller patches processed by multiple generators face scalability issues. In this work, we propose LSTM-QGAN, a QGAN architecture that eliminates PCA preprocessing and integrates quantum long short-term memory (QLSTM) to ensure scalable performance. Our experiments show that LSTM-QGAN significantly enhances both performance and scalability over state-of-the-art QGAN models, with visual data improvements, reduced Frechet Inception Distance scores, and reductions of 5x in qubit counts, 5x in single-qubit gates, and 12x in two-qubit gates.
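To make the architecture concrete, below is a minimal, illustrative sketch of the idea the abstract describes: an LSTM cell whose gate transformations are small variational quantum circuits (a QLSTM), unrolled as a recurrent generator so that the output size grows with the number of time steps rather than with the qubit count. This is not the authors' implementation; it assumes PennyLane with the PyTorch interface, and all names (make_vqc, QLSTMCell, QLSTMGenerator), the qubit count, circuit depth, and circuit templates are assumptions chosen for illustration.
```python
# Illustrative sketch only, not the paper's code. Assumes `pennylane` and `torch`;
# qubit count, circuit depth, and templates are arbitrary illustrative choices.
import pennylane as qml
import torch
import torch.nn as nn

N_QUBITS = 4        # width of each variational circuit (assumption)
N_VQC_LAYERS = 2    # depth of each variational circuit (assumption)
dev = qml.device("default.qubit", wires=N_QUBITS)


def make_vqc():
    """One variational circuit: angle encoding, entangling layers, Pauli-Z readout."""
    @qml.qnode(dev, interface="torch")
    def circuit(inputs, weights):
        qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
        qml.BasicEntanglerLayers(weights, wires=range(N_QUBITS))
        return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

    return qml.qnn.TorchLayer(circuit, {"weights": (N_VQC_LAYERS, N_QUBITS)})


class QLSTMCell(nn.Module):
    """LSTM cell whose four gate transformations are variational quantum circuits."""
    GATES = ("forget", "input", "update", "output")

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Classical projections map into and out of the small qubit register.
        self.proj_in = nn.Linear(input_dim + hidden_dim, N_QUBITS)
        self.vqcs = nn.ModuleDict({g: make_vqc() for g in self.GATES})
        self.proj_out = nn.ModuleDict(
            {g: nn.Linear(N_QUBITS, hidden_dim) for g in self.GATES}
        )

    def _gate(self, name, v):
        return self.proj_out[name](self.vqcs[name](v))

    def forward(self, x, state):
        h, c = state
        # Bound the encoding angles before feeding them into the circuits.
        v = torch.tanh(self.proj_in(torch.cat([x, h], dim=-1)))
        f = torch.sigmoid(self._gate("forget", v))   # forget gate
        i = torch.sigmoid(self._gate("input", v))    # input gate
        g = torch.tanh(self._gate("update", v))      # candidate cell update
        o = torch.sigmoid(self._gate("output", v))   # output gate
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class QLSTMGenerator(nn.Module):
    """Unrolls the QLSTM and emits one patch per step, so the sample size is set
    by the number of recurrent steps rather than by the number of qubits."""

    def __init__(self, noise_dim, patch_dim, n_patches):
        super().__init__()
        self.n_patches = n_patches
        self.cell = QLSTMCell(noise_dim, hidden_dim=patch_dim)

    def forward(self, z):
        h = torch.zeros(z.shape[0], self.cell.hidden_dim)
        c = torch.zeros_like(h)
        patches = []
        for _ in range(self.n_patches):
            h, c = self.cell(z, (h, c))
            patches.append(h)
        return torch.cat(patches, dim=-1)  # flat generated sample


# Example: 8 patches of 16 values each -> a 128-dimensional sample from 4-dim noise.
z = torch.randn(5, 4)
fake = QLSTMGenerator(noise_dim=4, patch_dim=16, n_patches=8)(z)
print(fake.shape)  # torch.Size([5, 128])
```
Pairing such a generator with a discriminator and a standard GAN loss gives the overall QGAN setup; keeping the circuit width fixed while growing the output with the number of recurrent steps is what makes the qubit- and gate-count reductions quoted in the abstract plausible without splitting inputs across multiple generators.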
Related papers
- Q-S5: Towards Quantized State Space Models [41.94295877935867]
State Space Models (SSMs) have emerged as a potent alternative to transformers.
This paper investigates the effect of quantization on the S5 model to understand its impact on model performance.
arXiv Detail & Related papers (2024-06-13T09:53:24Z)
- EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models [21.17675493267517]
Post-training quantization (PTQ) and quantization-aware training (QAT) are two main approaches to compress and accelerate diffusion models.
We introduce a data-free and parameter-efficient fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, to achieve QAT-level performance with PTQ-like efficiency.
Our method significantly outperforms previous PTQ-based diffusion models while maintaining similar time and data efficiency.
arXiv Detail & Related papers (2023-10-05T02:51:53Z)
- Single entanglement connection architecture between multi-layer bipartite Hardware Efficient Ansatz [18.876952671920133]
We propose a single entanglement connection architecture (SECA) for a bipartite hardware efficient ansatz.
Our results indicate the superiority of SECA over the common full entanglement connection architecture (FECA) in terms of computational performance.
arXiv Detail & Related papers (2023-07-23T13:36:30Z)
- ISAAQ: Ising Machine Assisted Quantum Compiler [3.8137985834223502]
We propose ISing mAchine Assisted Quantum compiler (ISAAQ) to perform qubit routing with Ising machines.
ISAAQ accurately estimates the compilation costs by updating itself using previous compilation results.
ISAAQ exploits a cost-reduction method that implements commutative logical Controlled-NOT (CNOT) gates with fewer physical CNOT gates.
arXiv Detail & Related papers (2023-03-06T01:47:10Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitive as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
- 4-bit Quantization of LSTM-based Speech Recognition Models [40.614677908909705]
We investigate the impact of aggressive low-precision representations of weights and activations in two families of large LSTM-based architectures for Automatic Speech Recognition.
We show that minimal accuracy loss is achievable with an appropriate choice of quantizers and initializations.
arXiv Detail & Related papers (2021-08-27T00:59:52Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR with low-bit quantization achieves performance on par with its full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
- PAMS: Quantized Super-Resolution via Parameterized Max Scale [84.55675222525608]
Deep convolutional neural networks (DCNNs) have shown dominant performance in the task of super-resolution (SR).
We propose a new quantization scheme termed PArameterized Max Scale (PAMS), which applies the trainable truncated parameter to explore the upper bound of the quantization range adaptively.
Experiments demonstrate that the proposed PAMS scheme can effectively compress and accelerate existing SR models such as EDSR and RDN; a generic sketch of this learnable-max-scale idea is given after this list.
arXiv Detail & Related papers (2020-11-09T06:16:05Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze this design choice in the context of improving both the accuracy and the efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely low computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
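As referenced in the PAMS entry above, here is a generic, hedged sketch in PyTorch of a quantizer whose range has a trainable upper bound. It is not the PAMS authors' implementation; the module name, bit-width, and initial bound are illustrative assumptions.
```python
# Generic sketch of a quantizer with a trainable clipping bound ("max scale").
# Not the PAMS implementation; the module name and defaults are assumptions.
import torch
import torch.nn as nn


class LearnableMaxScaleQuant(nn.Module):
    """Uniform k-bit quantizer whose upper clipping bound alpha is learned."""

    def __init__(self, n_bits: int = 4, init_alpha: float = 6.0):
        super().__init__()
        self.n_levels = 2 ** n_bits - 1
        self.alpha = nn.Parameter(torch.tensor(init_alpha))  # trainable upper bound

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.clamp(self.alpha, min=1e-6)        # keep the bound positive
        step = alpha / self.n_levels                     # quantization step size
        x_clipped = torch.minimum(torch.relu(x), alpha)  # restrict to [0, alpha]
        x_quant = torch.round(x_clipped / step) * step
        # Straight-through estimator: quantized values in the forward pass,
        # identity gradients (through the clipping) in the backward pass.
        return x_clipped + (x_quant - x_clipped).detach()


# Example: quantize post-ReLU activations to 4 bits; alpha adapts during training.
quant = LearnableMaxScaleQuant(n_bits=4)
act = torch.relu(torch.randn(2, 8))
print(quant(act).shape)  # torch.Size([2, 8])
```
Because the clipping bound alpha is a parameter, it receives gradients through the clipping operation and adapts to the activation statistics during training, which is the adaptive upper-bound behavior that summary describes.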
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.