Hybrid Quantum-Classical Model for Image Classification
- URL: http://arxiv.org/abs/2509.13353v1
- Date: Sun, 14 Sep 2025 09:55:00 GMT
- Title: Hybrid Quantum-Classical Model for Image Classification
- Authors: Muhammad Adnan Shahzad
- Abstract summary: This study presents a systematic comparison between hybrid quantum-classical neural networks and purely classical models across three benchmark datasets. The hybrid models integrate parameterized quantum circuits with classical deep learning architectures, while the classical counterparts use conventional convolutional neural networks (CNNs). Experiments were conducted over 50 training epochs for each dataset, with evaluations on validation accuracy, test accuracy, training time, computational resource usage, and adversarial robustness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a systematic comparison between hybrid quantum-classical neural networks and purely classical models across three benchmark datasets (MNIST, CIFAR100, and STL10) to evaluate their performance, efficiency, and robustness. The hybrid models integrate parameterized quantum circuits with classical deep learning architectures, while the classical counterparts use conventional convolutional neural networks (CNNs). Experiments were conducted over 50 training epochs for each dataset, with evaluations on validation accuracy, test accuracy, training time, computational resource usage, and adversarial robustness (tested with $\epsilon=0.1$ perturbations). Key findings demonstrate that hybrid models consistently outperform classical models in final accuracy, achieving 99.38\% (MNIST), 41.69\% (CIFAR100), and 74.05\% (STL10) validation accuracy, compared to classical benchmarks of 98.21\%, 32.25\%, and 63.76\%, respectively. Notably, the hybrid advantage scales with dataset complexity, showing the most significant gains on CIFAR100 (+9.44\%) and STL10 (+10.29\%). Hybrid models also train 5--12$\times$ faster (e.g., 21.23s vs. 108.44s per epoch on MNIST) and use 6--32\% fewer parameters while maintaining superior generalization to unseen test data. Adversarial robustness tests reveal that hybrid models are significantly more resilient on simpler datasets (e.g., 45.27\% robust accuracy on MNIST vs. 10.80\% for classical) but show comparable fragility on complex datasets like CIFAR100 ($\sim$1\% robustness for both). Resource efficiency analyses indicate that hybrid models consume less memory (4--5GB vs. 5--6GB for classical) and lower CPU utilization (9.5\% vs. 23.2\% on average). These results suggest that hybrid quantum-classical architectures offer compelling advantages in accuracy, training efficiency, and parameter scalability, particularly for complex vision tasks.
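The abstract does not specify the circuit layout, so the following is a minimal numpy sketch of the general pattern it describes: a parameterized quantum circuit that angle-encodes inputs, entangles qubits, applies a trainable rotation layer, and returns expectation values as features for a classical head. The two-qubit size, RY-only layers, and the `fgsm` sign-perturbation (one common way to realize the $\epsilon=0.1$ attack) are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation as a (real-valued) 2x2 matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with control on qubit 0 (most significant in kron ordering):
# swaps basis states |10> and |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def pqc_features(x, weights):
    """Simulate a 2-qubit parameterized circuit: angle-encode the two
    inputs with RY, entangle with CNOT, apply a trainable RY layer, and
    return the Z expectation on each qubit as classical features."""
    state = np.zeros(4)
    state[0] = 1.0                                   # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state      # encoding layer
    state = CNOT @ state                             # entangling layer
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state  # trainable layer
    probs = state ** 2                               # real amplitudes here
    z0 = probs[0] + probs[1] - probs[2] - probs[3]   # <Z> on qubit 0
    z1 = probs[0] - probs[1] + probs[2] - probs[3]   # <Z> on qubit 1
    return np.array([z0, z1])

def fgsm(x, grad, eps=0.1):
    # Sign-gradient perturbation of magnitude eps, clipped to [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

In a hybrid model these expectation values would replace (or feed) an intermediate classical layer, with the rotation angles trained jointly with the CNN weights; at zero angles the circuit leaves |00> untouched and both features are +1.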
Related papers
- Hybrid Quantum-Classical Ensemble Learning for S&P 500 Directional Prediction [0.2538209532048867]
We introduce a hybrid ensemble framework combining quantum sentiment analysis, Decision Transformer architecture, and strategic model selection. We achieve 60.14% directional accuracy on S&P 500 prediction, a 3.10% improvement over individual models.
arXiv Detail & Related papers (2025-12-06T22:22:09Z)
- A Hybrid Neural Network with Smart Skip Connections for High-Precision, Low-Latency EMG-Based Hand Gesture Recognition [0.2356141385409842]
This paper presents a new hybrid neural network named ConSGruNet for precise and efficient hand gesture recognition. The proposed model boasts an accuracy of 99.7% in classifying 53 classes in just 25 milliseconds.
arXiv Detail & Related papers (2025-03-12T04:01:32Z)
- Malware Classification from Memory Dumps Using Machine Learning, Transformers, and Large Language Models [1.038088229789127]
This study investigates the performance of various classification models for a malware classification task using different feature sets and data configurations. XGB achieved the highest accuracy of 87.42% using the Top 45 Features, outperforming all other models. Deep learning models underperformed, with RNN achieving 66.71% accuracy and Transformers reaching 71.59%.
arXiv Detail & Related papers (2025-03-04T00:24:21Z)
- Computational Advantage in Hybrid Quantum Neural Networks: Myth or Reality? [4.635820333232683]
Hybrid Quantum Neural Networks (HQNNs) have gained attention for their potential to enhance computational performance. Do quantum layers offer computational advantages over purely classical models? This paper explores how classical and hybrid models adapt their architectural complexity to increasing problem complexity.
arXiv Detail & Related papers (2024-12-06T12:31:04Z)
- Lean classical-quantum hybrid neural network model for image classification [12.353900068459446]
We introduce a Lean Classical-Quantum Hybrid Neural Network (LCQHNN), which achieves efficient classification performance with only four layers of variational circuits. Our experiments demonstrate that LCQHNN achieves 100%, 99.02%, and 85.55% classification accuracy on the MNIST, FashionMNIST, and CIFAR-10 datasets.
arXiv Detail & Related papers (2024-12-03T00:37:11Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method, termed SSF, in which one only needs to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding [60.226644697970116]
Domain classification is the fundamental task in natural language understanding (NLU).
Most existing continual learning approaches suffer from low accuracy and performance fluctuation.
We propose a hyperparameter-free continual learning model for text data that can stably produce high performance under various environments.
arXiv Detail & Related papers (2022-01-05T02:46:16Z)
- Conformer-based Hybrid ASR System for Switchboard Dataset [99.88988282353206]
We present and evaluate a competitive conformer-based hybrid model training recipe.
We study different training aspects and methods to improve word-error-rate as well as to increase training speed.
We conduct experiments on Switchboard 300h dataset and our conformer-based hybrid model achieves competitive results.
arXiv Detail & Related papers (2021-11-05T12:03:18Z)
- Hybrid Quantum-Classical Neural Network for Incident Detection [2.5583276647402693]
The efficiency and reliability of real-time incident detection models directly impact the affected corridors' traffic safety and operational conditions.
Recent emergence of cloud-based quantum computing infrastructure and innovations in noisy intermediate-scale quantum devices have revealed a new era of quantum-enhanced algorithms.
A hybrid machine learning model, which includes classical and quantum machine learning (ML) models, is developed to identify incidents using connected vehicle (CV) data.
arXiv Detail & Related papers (2021-08-02T19:08:31Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on the RNN-Transducer together with improved beam search, is only 3.8% absolute WER worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely light-weight model, namely CSNet, which achieves performance comparable to large models with only about 0.2% (100k) of their parameters on popular object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.