Lean classical-quantum hybrid neural network model for image classification
- URL: http://arxiv.org/abs/2412.02059v2
- Date: Mon, 06 Jan 2025 08:38:22 GMT
- Title: Lean classical-quantum hybrid neural network model for image classification
- Authors: Ao Liu, Cuihong Wen, Jieci Wang
- Abstract summary: We introduce a Lean Classical-Quantum Hybrid Neural Network (LCQHNN), which achieves efficient classification performance with only four layers of variational circuits. We apply the LCQHNN to image classification tasks on public datasets and achieve a classification accuracy of 99.02%.
- Score: 12.353900068459446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of algorithms from quantum information with neural networks has enabled unprecedented advancements in various domains. Nonetheless, the application of quantum machine learning algorithms for image classification predominantly relies on traditional architectures such as variational quantum circuits. The performance of these models is closely tied to the scale of their parameters, with the substantial demand for parameters potentially leading to limitations in computational resources and a significant increase in computation time. In this paper, we introduce a Lean Classical-Quantum Hybrid Neural Network (LCQHNN), which achieves efficient classification performance with only four layers of variational circuits, thereby substantially reducing computational costs. We apply the LCQHNN to image classification tasks on public datasets and achieve a classification accuracy of 99.02% on the dataset, marking a 5.07% improvement over traditional deep learning methods. Under the same parameter conditions, this method shows a 75% and 70.59% improvement in training convergence speed on two datasets. Furthermore, through visualization studies, it is found that the model effectively captures key data features during training and establishes a clear association between these features and their corresponding categories. This study confirms that the employment of quantum algorithms enhances the model's ability to handle complex classification problems.
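For readers who want a concrete picture of this kind of architecture, the following is a minimal, hypothetical sketch of a hybrid classical-quantum classifier with a four-layer variational circuit. It is not the authors' LCQHNN code; the use of PennyLane and PyTorch, the qubit count, the 28x28 input size, the 10-class read-out, and the choice of AngleEmbedding/StronglyEntanglingLayers are all illustrative assumptions. Only the general idea of a shallow (four-layer) variational block inside a classical network is taken from the abstract.

```python
# Hypothetical sketch of a lean hybrid classical-quantum classifier (PennyLane + PyTorch).
# NOT the authors' LCQHNN implementation; all hyperparameters are illustrative assumptions.
import pennylane as qml
import torch.nn as nn

n_qubits = 4    # assumed width of the quantum layer
n_layers = 4    # four variational layers, mirroring the "lean" design described above

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def variational_circuit(inputs, weights):
    # Encode the compressed classical features as rotation angles.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Four entangling variational layers with trainable rotation angles.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_layers, n_qubits, 3)}
quantum_layer = qml.qnn.TorchLayer(variational_circuit, weight_shapes)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, n_qubits),  # classical compression of the image features
    nn.Tanh(),                     # bound the features for angle encoding
    quantum_layer,                 # 4-layer variational quantum circuit
    nn.Linear(n_qubits, 10),       # classical read-out to 10 classes
)
```

A cross-entropy loss and a standard optimizer can then be used exactly as for a classical network; the quantum weights are trained jointly with the classical ones.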
Related papers
- Regression and Classification with Single-Qubit Quantum Neural Networks [0.0]
We use a resource-efficient and scalable Single-Qubit Quantum Neural Network (SQQNN) for both regression and classification tasks.
For classification, we introduce a novel training method inspired by the Taylor series, which can efficiently find a global minimum in a single step.
The SQQNN exhibits strong, virtually error-free performance on regression and classification tasks, including the MNIST dataset.
arXiv Detail & Related papers (2024-12-12T17:35:36Z) - Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification [10.069224006497162]
We introduce a novel architecture termed Parallel Proportional Fusion of Quantum and Spiking Neural Networks (PPF-QSNN).
The proposed PPF-QSNN outperforms both the existing spiking neural network and the serial quantum neural network across metrics such as accuracy, loss, and robustness.
This study lays the groundwork for the advancement and application of quantum advantage in artificial intelligent computations.
arXiv Detail & Related papers (2024-04-01T10:35:35Z) - The role of data embedding in equivariant quantum convolutional neural
networks [2.255961793913651]
We investigate the role of classical-to-quantum embedding on the performance of equivariant quantum neural networks (EQNNs).
We numerically compare the classification accuracy of EQCNNs with three different basis-permuted amplitude embeddings to the one obtained from a non-equivariant quantum convolutional neural network (QCNN).
arXiv Detail & Related papers (2023-12-20T18:25:15Z) - ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z) - Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC (a generic illustration of the idea appears after this list).
arXiv Detail & Related papers (2023-06-09T09:42:21Z) - Pooling techniques in hybrid quantum-classical convolutional neural
networks [0.0]
We perform an in-depth study of pooling techniques in hybrid quantum-classical convolutional neural networks (QCCNNs) for classifying 2D medical images.
We find similar or better performance in comparison to an equivalent classical model and QCCNN without pooling.
It is promising to study architectural choices in QCCNNs in more depth for future applications.
arXiv Detail & Related papers (2023-05-09T16:51:46Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class
Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK (a generic single-qubit data re-uploading sketch appears after this list).
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Multiclass classification using quantum convolutional neural networks
with hybrid quantum-classical learning [0.5999777817331318]
We propose a quantum machine learning approach based on quantum convolutional neural networks for solving multiclass classification problems.
We use the proposed approach to demonstrate 4-class classification on the MNIST dataset, using eight qubits for data encoding and four ancilla qubits.
Our results demonstrate that our solution achieves accuracy comparable to classical convolutional neural networks with a similar number of trainable parameters.
arXiv Detail & Related papers (2022-03-29T09:07:18Z) - Quantum-inspired Complex Convolutional Neural Networks [17.65730040410185]
We improve the quantum-inspired neurons by exploiting the complex-valued weights which have richer representational capacity and better non-linearity.
We develop models of quantum-inspired convolutional neural networks (QICNNs) capable of processing high-dimensional data.
The classification accuracy of the five QICNNs is tested on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2021-10-31T03:10:48Z) - On Circuit-based Hybrid Quantum Neural Networks for Remote Sensing
Imagery Classification [88.31717434938338]
The hybrid QCNNs enrich the classical architecture of CNNs by introducing a quantum layer within a standard neural network.
The novel QCNN proposed in this work is applied to the Land Use and Land Cover (LULC) classification, chosen as an Earth Observation (EO) use case.
The results of the multiclass classification prove the effectiveness of the presented approach by demonstrating that the QCNN outperforms its classical counterparts.
arXiv Detail & Related papers (2021-09-20T12:41:50Z) - Compact representations of convolutional neural networks via weight
pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Comparing concepts of quantum and classical neural network models for
image classification task [0.456877715768796]
This material includes the results of experiments on training and performance of a hybrid quantum-classical neural network.
Although its simulation is time-consuming, the quantum network outperforms the classical network.
arXiv Detail & Related papers (2021-08-19T18:49:30Z) - The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z) - Once Quantization-Aware Training: High Performance Extremely Low-bit
Architecture Search [112.05977301976613]
We propose to combine Network Architecture Search methods with quantization to enjoy the merits of the two sides.
We first propose the joint training of architecture and quantization with a shared step size to acquire a large number of quantized models.
Then a bit-inheritance scheme is introduced to transfer the quantized models to the lower bit, which further reduces the time cost and improves the quantization accuracy.
arXiv Detail & Related papers (2020-10-09T03:52:16Z)
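Two of the entries above (the SQQNN paper and the didactic single-qubit approach) revolve around single-qubit learning with data re-uploading. The following is a minimal, self-contained NumPy sketch of the general data re-uploading idea, not the specific method of either paper; the layer count, gate choices, and parameter shapes are assumptions made for illustration.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit rotation about the Z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])

def reupload_expectation(x, params):
    """Return <Z> after several data re-uploading layers on one qubit.

    x      : 1D feature vector (e.g. two compressed image features)
    params : array of shape (n_layers, 2, len(x)); each layer holds a weight
             vector and a bias vector that re-encode x as rotation angles.
    """
    state = np.array([1.0, 0.0], dtype=complex)          # start in |0>
    for w, b in params:                                   # one re-uploading layer
        angles = w * x + b                                # data mixed with trainable parameters
        state = rz(angles[-1]) @ (ry(angles[0]) @ state)  # re-encode on the same qubit
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# Example: classify by the sign of <Z>; training the parameters with a classical
# optimizer (gradient-free or parameter-shift) is omitted here.
rng = np.random.default_rng(0)
params = rng.normal(size=(4, 2, 2))                       # 4 layers, weights + biases, 2 features
print(reupload_expectation(np.array([0.3, -0.7]), params))
```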
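The weight re-mapping entry above describes passing the trainable parameters of a variational quantum circuit through a bounded function before they are used as rotation angles. Below is a hypothetical illustration of that idea in plain NumPy; the functions shown are common bounded maps chosen for this sketch, not necessarily the seven studied in that paper.

```python
import numpy as np

# Bounded re-mapping functions: each squashes an unconstrained trainable weight
# into a fixed rotation-angle interval before it enters the circuit.
def remap_tanh(w):
    return np.pi * np.tanh(w)         # R -> (-pi, pi)

def remap_arctan(w):
    return 2.0 * np.arctan(w)         # R -> (-pi, pi)

def remap_clip(w):
    return np.clip(w, -np.pi, np.pi)  # hard clipping baseline

# A variational circuit would then apply, e.g., RY(remap_tanh(w_i)) instead of RY(w_i),
# while the optimizer keeps updating the unconstrained w_i.
weights = np.random.default_rng(1).normal(scale=3.0, size=4)
for remap in (remap_tanh, remap_arctan, remap_clip):
    print(remap.__name__, remap(weights))
```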