Computational Advantage in Hybrid Quantum Neural Networks: Myth or Reality?
- URL: http://arxiv.org/abs/2412.04991v3
- Date: Fri, 21 Feb 2025 06:52:22 GMT
- Title: Computational Advantage in Hybrid Quantum Neural Networks: Myth or Reality?
- Authors: Muhammad Kashif, Alberto Marchisio, Muhammad Shafique
- Abstract summary: Hybrid Quantum Neural Networks (HQNNs) have gained attention for their potential to enhance computational performance. Do quantum layers offer computational advantages over purely classical models? This paper explores how classical and hybrid models adapt their architectural complexity to increasing problem complexity.
- Score: 4.635820333232683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hybrid Quantum Neural Networks (HQNNs) have gained attention for their potential to enhance computational performance by incorporating quantum layers into classical neural network (NN) architectures. However, a key question remains: Do quantum layers offer computational advantages over purely classical models? This paper explores how classical and hybrid models adapt their architectural complexity to increasing problem complexity. Using a multiclass classification problem, we benchmark classical models to identify optimal configurations for accuracy and efficiency, establishing a baseline for comparison. HQNNs, simulated on classical hardware (as common in the Noisy Intermediate-Scale Quantum (NISQ) era), are evaluated for their scaling of floating-point operations (FLOPs) and parameter growth. Our findings reveal that as problem complexity increases, HQNNs exhibit more efficient scaling of architectural complexity and computational resources. For example, from 10 to 110 features, HQNNs show a 53.1% increase in FLOPs compared to 88.1% for classical models, despite simulation overheads. Additionally, the parameter growth rate is slower in HQNNs (81.4%) than in classical models (88.5%). These results highlight HQNNs' scalability and resource efficiency, positioning them as a promising alternative for solving complex computational problems.
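The headline growth figures are plain relative increases between the smallest (10-feature) and largest (110-feature) configurations. A minimal sketch of that bookkeeping; the absolute FLOP counts below are hypothetical placeholders chosen only to reproduce the reported growth rates, not the paper's measurements:

```python
def growth_pct(v_start: float, v_end: float) -> float:
    """Percentage increase from the smallest to the largest configuration."""
    return 100.0 * (v_end - v_start) / v_start

# Hypothetical FLOP counts, chosen only so the growth rates match the
# percentages reported in the abstract (+88.1% classical, +53.1% hybrid).
classical_flops = {10: 1.000e6, 110: 1.881e6}
hybrid_flops = {10: 1.000e6, 110: 1.531e6}

print(f"classical: +{growth_pct(classical_flops[10], classical_flops[110]):.1f}%")
print(f"hybrid:    +{growth_pct(hybrid_flops[10], hybrid_flops[110]):.1f}%")
```

The same formula applies to the reported parameter-count growth (+88.5% classical vs. +81.4% hybrid).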
Related papers
- Quantum LEGO Learning: A Modular Design Principle for Hybrid Artificial Intelligence [63.39968536637762]
We introduce Quantum LEGO Learning, a learning framework that treats classical and quantum components as reusable, composable learning blocks. Within this framework, a pre-trained classical neural network serves as a frozen feature block, while a VQC acts as a trainable adaptive module. We develop a block-wise generalization theory that decomposes learning error into approximation and estimation components.
arXiv Detail & Related papers (2026-01-29T14:29:21Z)
- FAQNAS: FLOPs-aware Hybrid Quantum Neural Architecture Search using Genetic Algorithm [2.1702673021505245]
Hybrid Quantum Neural Networks (HQNNs) are emerging as promising models in the noisy intermediate-scale quantum (NISQ) era. We introduce FAQNAS, a FLOPs-aware neural architecture search (NAS) framework that formulates HQNN design as a multi-objective optimization problem balancing accuracy and FLOPs. Our results establish FLOPs-awareness as a practical criterion for HQNN design in the NISQ era and as a scalable principle for future HQNN systems.
arXiv Detail & Related papers (2025-11-13T08:04:17Z)
- Hybrid Quantum-Classical Neural Networks for Few-Shot Credit Risk Assessment [52.05742536403784]
This work tackles the challenge of few-shot credit risk assessment. We design and implement a novel hybrid quantum-classical workflow. A Quantum Neural Network (QNN) was trained via the parameter-shift rule. On a real-world credit dataset of 279 samples, our QNN achieved a robust average AUC of 0.852 +/- 0.027 in simulations and yielded an impressive AUC of 0.88 in the hardware experiment.
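The parameter-shift rule mentioned above yields exact gradients of quantum expectation values: for gates generated by Pauli operators, the derivative is half the difference of two circuit evaluations shifted by +/- pi/2. A minimal sketch, in which a closed-form one-qubit expectation (<Z> after RY(theta)|0>, i.e. cos(theta)) stands in for a real circuit execution:

```python
import math

def expval_z(theta: float) -> float:
    """<Z> after RY(theta)|0>; a stand-in for running a parameterized circuit."""
    return math.cos(theta)

def parameter_shift_grad(f, theta: float, shift: float = math.pi / 2) -> float:
    """Exact derivative of f at theta for Pauli-generated gates."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.3
grad = parameter_shift_grad(expval_z, theta)
# grad equals the analytic derivative -sin(theta) up to floating-point error
```

Unlike finite differences, the shift is macroscopic (pi/2), which is why the rule is practical on noisy hardware.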
arXiv Detail & Related papers (2025-09-17T08:36:05Z)
- RhoDARTS: Differentiable Quantum Architecture Search with Density Matrix Simulations [48.670876200492415]
Variational Quantum Algorithms (VQAs) are a promising approach for leveraging powerful Noisy Intermediate-Scale Quantum (NISQ) computers. We propose $\rho$DARTS, a differentiable Quantum Architecture Search (QAS) algorithm that models the search process as the evolution of a quantum mixed state.
arXiv Detail & Related papers (2025-06-04T08:30:35Z)
- Efficient Quantum Convolutional Neural Networks for Image Classification: Overcoming Hardware Constraints [2.3895835682351287]
Quantum convolutional neural networks (QCNNs) hold potential to outperform classical approaches. We introduce an encoding scheme that significantly reduces the input dimensionality. We validate our experiments on IBM's Heron r2 quantum processor, achieving 96.08% classification accuracy.
arXiv Detail & Related papers (2025-05-09T11:09:52Z)
- TunnElQNN: A Hybrid Quantum-classical Neural Network for Efficient Learning [0.0]
We develop TunnElQNN, a non-sequential architecture composed of alternating classical and quantum layers. We evaluate the performance of this hybrid model on a synthetic dataset of interleaving half-circles for multi-class classification tasks. Our results show that the TunnElQNN model consistently outperforms the ReLUQNN counterpart.
arXiv Detail & Related papers (2025-05-02T00:30:50Z)
- HQViT: Hybrid Quantum Vision Transformer for Image Classification [48.72766405978677]
We propose a Hybrid Quantum Vision Transformer (HQViT) to accelerate model training while enhancing model performance.
HQViT introduces whole-image processing with amplitude encoding to better preserve global image information without additional positional encoding.
Experiments across various computer vision datasets demonstrate that HQViT outperforms existing models, achieving a maximum improvement of up to 10.9% (on the MNIST 10-classification task) over the state of the art.
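Amplitude encoding, as used above, loads a classical vector into the 2^n amplitudes of an n-qubit state, which requires zero-padding to a power of two and unit normalization. A generic sketch of that classical preprocessing step (illustrating the general encoding idea, not HQViT's specific circuit):

```python
import math

def amplitude_encode(pixels):
    """Return unit-norm amplitudes of length 2^n and the qubit count n."""
    n_qubits = max(1, math.ceil(math.log2(len(pixels))))
    padded = list(pixels) + [0.0] * (2 ** n_qubits - len(pixels))
    norm = math.sqrt(sum(x * x for x in padded))
    return [x / norm for x in padded], n_qubits

# A length-3 input is padded to 4 amplitudes and encoded on 2 qubits;
# the 3-4-5 triangle gives amplitudes [0.6, 0.0, 0.8, 0.0].
amps, n = amplitude_encode([3.0, 0.0, 4.0])
```

The appeal is density: an image with d pixels needs only about log2(d) qubits, at the cost of a potentially deep state-preparation circuit.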
arXiv Detail & Related papers (2025-04-03T16:13:34Z)
- Lean classical-quantum hybrid neural network model for image classification [12.353900068459446]
We introduce a Lean Classical-Quantum Hybrid Neural Network (LCQHNN), which achieves classification performance with only four layers of variational circuits.
We apply the LCQHNN to image classification tasks on public datasets and achieve a classification accuracy of 99.02%.
arXiv Detail & Related papers (2024-12-03T00:37:11Z)
- Quantum convolutional neural networks for jet images classification [0.0]
This paper addresses the performance of quantum machine learning in the context of high-energy physics.
We use a quantum convolutional neural network (QCNN) for this task and compare its performance with a classical CNN.
Our results indicate that QCNNs with proper setups tend to perform better than their CNN counterparts.
arXiv Detail & Related papers (2024-08-16T12:28:10Z)
- A Quantum Leaky Integrate-and-Fire Spiking Neuron and Network [0.0]
We introduce a new software model for quantum neuromorphic computing.
We use these neurons as building blocks in the construction of a quantum spiking neural network (QSNN) and a quantum spiking convolutional neural network (QSCNN).
arXiv Detail & Related papers (2024-07-23T11:38:06Z)
- Studying the Impact of Quantum-Specific Hyperparameters on Hybrid Quantum-Classical Neural Networks [4.951980887762045]
Hybrid quantum-classical neural networks (HQNNs) represent a promising solution that combines the strengths of classical machine learning with quantum computing capabilities.
In this paper, we investigate the impact of these variations on different HQNN models for image classification tasks, implemented on the PennyLane framework.
We aim to uncover intuitive and counter-intuitive learning patterns of HQNN models within granular levels of controlled quantum perturbations, to form a sound basis for their correlation to accuracy and training time.
arXiv Detail & Related papers (2024-02-16T11:44:25Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
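Distillation of this kind typically minimizes the KL divergence between temperature-softened teacher and student output distributions (Hinton et al.'s standard formulation; the paper's exact loss may differ). A minimal sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened probability distribution over logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """T^2-scaled KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)  # soft targets from the classical teacher
    q = softmax(student_logits, temperature)  # predictions of the quantum student
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = distillation_loss([1.0, 0.2, -0.5], [2.0, 0.1, -1.0])
# loss is zero only when the softened distributions coincide
```

A higher temperature exposes more of the teacher's "dark knowledge" (relative probabilities of wrong classes), which is what the student circuit learns from.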
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how the training of quantum neural networks (QNNs) can be done using quantum optimization algorithms.
A QNN structure is built in which a variational parameterized circuit is incorporated as the input layer, termed a Variational Quantum Neural Network (VQNN).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack image classification datasets, converging in less time than a plain QNN while maintaining decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
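The sensitivity to contraction order is easy to see in a toy matrix chain: the same product evaluated in two orders can differ in cost by orders of magnitude (an illustration of the problem being optimized, not of the paper's RL method):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """Approximate FLOPs to multiply an (m x k) matrix by a (k x n) matrix."""
    return 2 * m * k * n

# Chain A (100x2) @ B (2x100) @ C (100x2): the order changes the cost.
cost_ab_first = matmul_flops(100, 2, 100) + matmul_flops(100, 100, 2)  # (AB)C
cost_bc_first = matmul_flops(2, 100, 2) + matmul_flops(100, 2, 2)      # A(BC)
# (AB)C costs 80,000 FLOPs while A(BC) costs only 1,600
```

For general tensor networks the search space of orderings grows super-exponentially, which is what motivates a learned ordering policy.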
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from the severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs due to vanishing gradients, whose rate is exponential in the number of input qubits.
We show that QNNs with tree-tensor and step-controlled structures can be applied to binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.