An Amplitude-Encoding-Based Classical-Quantum Transfer Learning framework: Outperforming Classical Methods in Image Recognition
- URL: http://arxiv.org/abs/2502.20184v1
- Date: Thu, 27 Feb 2025 15:20:01 GMT
- Title: An Amplitude-Encoding-Based Classical-Quantum Transfer Learning framework: Outperforming Classical Methods in Image Recognition
- Authors: Shouwei Hu, Xi Li, Banyao Ruan, Zhihao Liu
- Abstract summary: This paper proposes an amplitude-encoding-based classical-quantum transfer learning (AE-CQTL) framework, accompanied by an effective learning algorithm. Based on the AE-CQTL framework, we designed and implemented two CQTL neural network models: Transfer Learning Quantum Neural Network (TLQNN) and Transfer Learning Quantum Convolutional Neural Network (TLQCNN).
- Score: 8.971481563534537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classical-quantum transfer learning (CQTL) method is introduced to address the challenge of training large-scale, high-resolution image data on a limited number of qubits (ranging from tens to hundreds) in the current Noisy Intermediate-Scale Quantum (NISQ) era. Existing CQTL frameworks have demonstrated quantum advantages with a small number of parameters (around 50), but the performance of quantum neural networks is sensitive to the number of parameters, and larger-scale quantum circuits with more parameters remain largely unexplored. This paper proposes an amplitude-encoding-based classical-quantum transfer learning (AE-CQTL) framework, accompanied by an effective learning algorithm. The AE-CQTL framework multiplies the parameters of quantum circuits by using a multi-layer ansatz. Based on the AE-CQTL framework, we designed and implemented two CQTL neural network models: Transfer Learning Quantum Neural Network (TLQNN) and Transfer Learning Quantum Convolutional Neural Network (TLQCNN). Both models significantly expand the parameter capacity of quantum circuits, elevating the parameter scale from a few dozen to over one hundred parameters. In cross-experiments with three benchmark datasets (MNIST, Fashion-MNIST and CIFAR10) and three source models (ResNet18, ResNet50 and DenseNet121), TLQNN and TLQCNN exceeded the benchmark classical classifier on multiple performance metrics, including accuracy, convergence, stability, and generalization capability. Our work contributes to advancing the application of classical-quantum transfer learning on larger-scale quantum devices in the future.
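The key property behind amplitude encoding is that n qubits can hold 2^n normalized feature values as state amplitudes, which is what lets an AE-CQTL-style model load high-dimensional features from a pre-trained source network onto a small qubit register. The pure-Python sketch below is illustrative only (the function name and example values are ours, not from the paper); it shows the classical normalization step that turns a length-2^n feature vector into valid amplitudes:

```python
import math

def amplitude_encode(features):
    """Normalize a length-2^n feature vector into 2^n state amplitudes.

    n qubits can store 2^n classical values this way, so a small
    quantum register can absorb a high-dimensional classical feature
    vector from a pre-trained source model.
    """
    norm = math.sqrt(sum(x * x for x in features))
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return [x / norm for x in features]

# Example: 8 features fit into the amplitudes of a 3-qubit state.
state = amplitude_encode([1.0, 2.0, 2.0, 4.0, 0.0, 0.0, 1.0, 2.0])
print(sum(a * a for a in state))  # squared amplitudes sum to 1
```

Under this scheme an 8-dimensional vector needs only 3 qubits, and a 512-dimensional ResNet18 feature vector would need 9, which is why amplitude encoding suits the tens-of-qubits regime the abstract describes.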
Related papers
- Let the Quantum Creep In: Designing Quantum Neural Network Models by Gradually Swapping Out Classical Components [1.024113475677323]
Modern AI systems are often built on neural networks.
We propose a framework where classical neural network layers are gradually replaced by quantum layers.
We conduct numerical experiments on image classification datasets to demonstrate the change of performance brought by the systematic introduction of quantum components.
arXiv Detail & Related papers (2024-09-26T07:01:29Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Studying the Impact of Quantum-Specific Hyperparameters on Hybrid Quantum-Classical Neural Networks [4.951980887762045]
Hybrid quantum-classical neural networks (HQNNs) represent a promising solution that combines the strengths of classical machine learning with quantum computing capabilities.
In this paper, we investigate the impact of these variations on different HQNN models for image classification tasks, implemented on the PennyLane framework.
We aim to uncover intuitive and counter-intuitive learning patterns of HQNN models within granular levels of controlled quantum perturbations, to form a sound basis for their correlation to accuracy and training time.
arXiv Detail & Related papers (2024-02-16T11:44:25Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
- Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits [70.97518416003358]
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
- Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noises from modern quantum devices, we apply sparse constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, the classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Classical-to-quantum convolutional neural network transfer learning [1.9336815376402723]
Machine learning using quantum convolutional neural networks (QCNNs) has demonstrated success in both quantum and classical data classification.
We propose transfer learning as an effective strategy for utilizing small QCNNs in the noisy intermediate-scale quantum era.
arXiv Detail & Related papers (2022-08-31T09:15:37Z)
- Multiclass classification using quantum convolutional neural networks with hybrid quantum-classical learning [0.5999777817331318]
We propose a quantum machine learning approach based on quantum convolutional neural networks for solving multiclass classification problems.
We use the proposed approach to demonstrate 4-class classification on the MNIST dataset, using eight qubits for data encoding and four ancilla qubits.
Our results demonstrate comparable accuracy of our solution with classical convolutional neural networks with comparable numbers of trainable parameters.
arXiv Detail & Related papers (2022-03-29T09:07:18Z)
- QTN-VQC: An End-to-End Learning framework for Quantum Neural Networks [71.14713348443465]
We introduce a trainable quantum tensor network (QTN) for quantum embedding on a variational quantum circuit (VQC).
QTN enables an end-to-end parametric model pipeline, namely QTN-VQC, from the generation of quantum embedding to the output measurement.
Our experiments on the MNIST dataset demonstrate the advantages of QTN for quantum embedding over other quantum embedding approaches.
arXiv Detail & Related papers (2021-10-06T14:44:51Z)
- Quantum convolutional neural network for classical data classification [0.8057006406834467]
We benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification.
We propose a quantum neural network model inspired by CNN that only uses two-qubit interactions throughout the entire algorithm.
arXiv Detail & Related papers (2021-08-02T06:48:34Z)
- Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z)
- Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We use a parameterisation of quantum systems using artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NQS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.