Patch-Based End-to-End Quantum Learning Network for Reduction and Classification of Classical Data
- URL: http://arxiv.org/abs/2409.15214v1
- Date: Mon, 23 Sep 2024 16:58:02 GMT
- Title: Patch-Based End-to-End Quantum Learning Network for Reduction and Classification of Classical Data
- Authors: Jishnu Mahmud, Shaikh Anowarul Fattah
- Abstract summary: In the noisy intermediate scale quantum (NISQ) era, the control over the qubits is limited due to errors caused by quantum decoherence, crosstalk, and imperfect calibration.
It is therefore necessary to reduce the size of large-scale classical data, such as images, before they can be processed by quantum networks; conventionally, this reduction is performed in the classical domain using networks such as autoencoders.
In this paper, a dynamic patch-based quantum-domain data reduction network with a classical attention mechanism is proposed to avoid such classical-domain data reductions.
- Score: 0.22099217573031676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the noisy intermediate-scale quantum (NISQ) era, the control over the qubits is limited due to the errors caused by quantum decoherence, crosstalk, and imperfect calibration. Hence, it is necessary to reduce the size of the large-scale classical data, such as images, when they are to be processed by quantum networks. Conventionally, input classical data are reduced in the classical domain using classical networks such as autoencoders and, subsequently, analyzed in the quantum domain. These conventional techniques involve training an enormous number of parameters, making them computationally costly. In this paper, a dynamic patch-based quantum domain data reduction network with a classical attention mechanism is proposed to avoid such classical-domain data reductions, and it is subsequently coupled with a novel quantum classifier to perform classification tasks. The architecture processes the classical data sequentially in patches and reduces them using a quantum convolutional-inspired reduction network, then further enriches them after measurement using a self-attention technique that utilizes a classical mask derived from simple statistical operations on the native classical data. The reduced representation is passed through a quantum classifier, which re-encodes it into quantum states, processes them through quantum ansatzes, and finally measures them to predict classes. The training process involves a joint optimization scheme that considers both the reduction and classifier networks, making the reduction operation dynamic. The proposed architecture has been extensively tested on the publicly available Fashion MNIST dataset, and it achieves excellent classification performance using significantly fewer training parameters.
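The abstract's pipeline (patch extraction → reduction → statistical attention mask applied after measurement → features for the classifier) can be sketched classically. This is a minimal illustrative sketch, not the paper's implementation: the quantum reduction circuit is stood in for by a trainable linear map with a bounded nonlinearity (mimicking expectation values in [-1, 1]), and the per-patch variance statistic for the attention mask is an assumption, since the paper only says the mask comes from simple statistical operations on the native data.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(image, patch):
    """Split a square image into non-overlapping patch x patch tiles."""
    h, w = image.shape
    tiles = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tiles.append(image[r:r + patch, c:c + patch].ravel())
    return np.stack(tiles)            # (num_patches, patch * patch)

def reduce_patch(x, weights):
    """Stand-in for the quantum convolutional-inspired reduction: a
    trainable linear map with a bounded nonlinearity, mimicking
    measured expectation values in [-1, 1]."""
    return np.tanh(weights @ x)       # (reduced_dim,)

def attention_mask(patches):
    """Classical mask from simple statistics of the native data:
    per-patch variance, normalized to sum to one (hypothetical choice)."""
    v = patches.var(axis=1)
    return v / (v.sum() + 1e-12)

image = rng.random((28, 28))                       # Fashion-MNIST-sized input
patches = extract_patches(image, patch=4)          # 49 patches of 16 pixels
W = rng.normal(scale=0.1, size=(4, patches.shape[1]))
reduced = np.stack([reduce_patch(p, W) for p in patches])   # (49, 4)
mask = attention_mask(patches)
enriched = reduced * mask[:, None]                 # attention-weighted features
features = enriched.ravel()                        # reduced input to the classifier
print(features.shape)
```

In the paper, `W` would be circuit parameters trained jointly with the downstream classifier, which is what makes the reduction dynamic rather than a fixed pre-processing step.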
Related papers
- Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders [0.0]
We propose an alternative method that reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that follows.
This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension.
arXiv Detail & Related papers (2024-09-05T08:51:20Z)
- Quantum Transfer Learning for MNIST Classification Using a Hybrid Quantum-Classical Approach [0.0]
This research explores the integration of quantum computing with classical machine learning for image classification tasks.
We propose a hybrid quantum-classical approach that leverages the strengths of both paradigms.
The experimental results indicate that while the hybrid model demonstrates the feasibility of integrating quantum computing with classical techniques, the accuracy of the final model, trained on quantum outcomes, is currently lower than the classical model trained on compressed features.
arXiv Detail & Related papers (2024-08-05T22:16:27Z)
- Synergy between noisy quantum computers and scalable classical deep learning [0.4999814847776097]
We investigate the potential of combining the computational power of noisy quantum computers with scalable classical convolutional neural networks (CNNs).
The goal is to accurately predict exact expectation values of parameterized quantum circuits representing the Trotter-decomposed dynamics of quantum Ising models.
Thanks to the quantum information, our CNNs succeed even when supervised learning based only on classical descriptors fails.
arXiv Detail & Related papers (2024-04-11T14:47:18Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
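The knowledge transfer described here can be illustrated with the standard Hinton-style distillation objective, where a student is trained against both hard labels and the teacher's temperature-softened output distribution. This is a generic sketch under that assumption; the paper's exact loss, temperature, and weighting are not given in this summary.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with optional temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style knowledge distillation loss (hypothetical choice):
    cross-entropy on hard labels plus KL divergence between the
    temperature-softened teacher and student distributions, with the
    usual T^2 scaling on the soft term."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * hard + (1 - alpha) * (T ** 2) * kl))
```

In the quantum setting of the cited paper, the student logits would come from measuring a parameterized quantum circuit, while the teacher logits come from a classical CNN such as LeNet or AlexNet.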
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
- Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing schemes that involve limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z)
- Deep Quantum Error Correction [73.54643419792453]
Quantum error correction codes (QECC) are a key component for realizing the potential of quantum computing.
In this work, we efficiently train novel end-to-end deep quantum error decoders.
The proposed method demonstrates the power of neural decoders for QECC by achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2023-01-27T08:16:26Z)
- Quantum-Classical Hybrid Information Processing via a Single Quantum System [1.1602089225841632]
Current technologies in quantum-based communication enable the integration of quantum data with classical data for hybrid processing.
We propose a quantum reservoir processor to harness quantum dynamics in computational tasks requiring both classical and quantum inputs.
arXiv Detail & Related papers (2022-09-01T14:33:40Z)
- Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We propose two different methods: one based on post-training quantization and one in which the codebook is learned during the training of the AE.
arXiv Detail & Related papers (2022-07-13T08:52:13Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- Quantum Deformed Neural Networks [83.71196337378022]
We develop a new quantum neural network layer designed to run efficiently on a quantum computer.
It can be simulated on a classical computer when restricted in the way it entangles input states.
arXiv Detail & Related papers (2020-10-21T09:46:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.