Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
- URL: http://arxiv.org/abs/2511.14989v2
- Date: Tue, 25 Nov 2025 18:00:58 GMT
- Title: Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
- Authors: Saeefa Rubaiyet Nowmi, Jesus Lopez, Md Mahmudul Alam Imon, Shahrooz Pouryousef, Mohammad Saidur Rahman,
- Abstract summary: We present a systematization of adversarial robustness in Quantum Machine Learning (QML). We implement representative attacks in three threat models: black-box, gray-box, and white-box. Our findings guide the development of secure and resilient QML architectures for practical deployment.
- Score: 1.274988274746616
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Quantum Machine Learning (QML) integrates quantum computational principles into learning algorithms, offering improved representational capacity and computational efficiency. Nevertheless, the security and robustness of QML systems remain underexplored, especially under adversarial conditions. In this paper, we present a systematization of adversarial robustness in QML, integrating conceptual organization with empirical evaluation across three threat models: black-box, gray-box, and white-box. We implement representative attacks in each category, including label-flipping for black-box, QUID encoder-level data poisoning for gray-box, and FGSM and PGD for white-box, using Quantum Neural Networks (QNNs) trained on two datasets from distinct domains: MNIST from computer vision and AZ-Class from Android malware, across multiple circuit depths (2, 5, 10, and 50 layers) and two encoding schemes (angle and amplitude). Our evaluation shows that amplitude encoding yields the highest clean accuracy (93% on MNIST and 67% on AZ-Class) in deep, noiseless circuits; however, it degrades sharply under adversarial perturbations and depolarization noise (p=0.01), dropping accuracy below 5%. In contrast, angle encoding, while offering lower representational capacity, remains more stable in shallow, noisy regimes, revealing a trade-off between capacity and robustness. Moreover, the QUID attack attains higher attack success rates, though quantum noise channels disrupt the Hilbert-space correlations it exploits, weakening its impact in image domains. This suggests that noise can act as a natural defense mechanism in Noisy Intermediate-Scale Quantum (NISQ) systems. Overall, our findings guide the development of secure and resilient QML architectures for practical deployment. These insights underscore the importance of designing threat-aware models that remain reliable under real-world noise in NISQ settings.
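The white-box FGSM attack mentioned in the abstract can be sketched in a few lines. The paper applies FGSM/PGD to trained QNNs; the toy logistic-regression model below merely stands in for the QNN so the gradient step is easy to verify. All names and values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    # binary cross-entropy for a single example
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def input_gradient(w, x, y):
    # analytic gradient of the loss with respect to the *input* x
    return (sigmoid(w @ x) - y) * w

def fgsm(w, x, y, eps):
    # Fast Gradient Sign Method: one step of size eps along sign(grad_x loss)
    return x + eps * np.sign(input_gradient(w, x, y))

w = np.array([1.0, -2.0, 0.5])   # fixed "trained" weights
x = np.array([0.2, 0.1, -0.3])   # clean input
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
assert bce_loss(w, x_adv, y) > bce_loss(w, x, y)  # the perturbation raises the loss
```

PGD, the other white-box attack used in the paper, amounts to iterating this step with a projection back into the eps-ball around the clean input.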
Related papers
- Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Towards Quantum Enhanced Adversarial Robustness with Rydberg Reservoir Learning [45.92935470813908]
Quantum reservoir computing (QRC) leverages the high-dimensional, nonlinear dynamics inherent in quantum many-body systems. Recent studies indicate that quantum classifiers based on variational circuits remain susceptible to adversarial perturbations. We present the first systematic evaluation of adversarial robustness in a QRC-based learning model.
arXiv Detail & Related papers (2025-10-15T12:17:23Z) - Quantum Machine Learning for UAV Swarm Intrusion Detection [25.52804434998647]
Intrusion detection in UAV swarms is complicated by high mobility, non-stationary traffic, and severe class imbalance. We benchmark three quantum machine learning (QML) approaches: quantum kernels, variational quantum neural networks (QNNs), and hybrid quantum-trained neural networks (QT-NNs). All models consume an 8-feature flow representation and are evaluated under identical preprocessing, balancing, and noise-model assumptions. Results reveal clear trade-offs: quantum kernels and QT-NNs excel in low-data, nonlinear regimes, while deeper QNNs suffer from trainability issues, and CNN
arXiv Detail & Related papers (2025-09-01T22:36:30Z) - TensorHyper-VQC: A Tensor-Train-Guided Hypernetwork for Robust and Scalable Variational Quantum Computing [50.95799256262098]
We introduce TensorHyper-VQC, a novel tensor-train (TT)-guided hypernetwork framework for quantum machine learning. Our framework delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware. These results position TensorHyper-VQC as a scalable and noise-resilient framework for advancing practical quantum machine learning on near-term devices.
arXiv Detail & Related papers (2025-08-01T23:37:55Z) - Adversarial Threats in Quantum Machine Learning: A Survey of Attacks and Defenses [2.089191490381739]
Quantum Machine Learning (QML) integrates quantum computing with classical machine learning to solve classification, regression, and generative tasks. This chapter examines adversarial threats unique to QML systems, focusing on vulnerabilities in cloud-based deployments, hybrid architectures, and quantum generative models.
arXiv Detail & Related papers (2025-06-27T01:19:49Z) - VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning [50.95799256262098]
Variational quantum circuits (VQCs) hold promise for quantum machine learning but face challenges in expressivity, trainability, and noise resilience. We propose VQC-MLPNet, a hybrid architecture where a VQC generates the first-layer weights of a classical multilayer perceptron during training, while inference is performed entirely classically.
arXiv Detail & Related papers (2025-06-12T01:38:15Z) - Experimental robustness benchmark of quantum neural network on a superconducting quantum processor [14.38187281782993]
Quantum machine learning (QML) models, like their classical counterparts, are vulnerable to adversarial attacks, hindering their secure deployment. Here, we report the first systematic experimental robustness benchmark for a 20-qubit quantum neural network (QNN). Our benchmarking framework features an efficient adversarial attack algorithm designed for QNNs, enabling quantitative characterization of adversarial robustness and robustness bounds.
arXiv Detail & Related papers (2025-05-22T14:18:14Z) - Fooling the Decoder: An Adversarial Attack on Quantum Error Correction [49.48516314472825]
In this work, we target a basic RL surface code decoder (DeepQ) to create the first adversarial attack on quantum error correction. We demonstrate an attack that reduces the logical qubit lifetime in memory experiments by up to five orders of magnitude. This attack highlights the susceptibility of machine learning-based QEC and underscores the importance of further research into robust QEC methods.
arXiv Detail & Related papers (2025-04-28T10:10:05Z) - Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era [2.348041867134616]
A key concern in the Quantum Machine Learning (QML) domain is the threat of data poisoning attacks in the current quantum cloud setting. In this work, we first propose a simple yet effective technique to measure intra-class encoder state similarity (ESS) by analyzing the outputs of encoding circuits. Through extensive experiments conducted in both noiseless and noisy environments, we introduce a Quantum Indiscriminate Data poisoning attack, QUID.
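The encoder state similarity idea in the QUID entry above can be sketched as an average pairwise state fidelity |&lt;psi_i|psi_j&gt;|^2 over the encoded examples of one class. The single-qubit angle encoding and the example feature values below are an illustrative reconstruction of the idea, not the QUID authors' exact circuits or metric.

```python
import numpy as np

def angle_encode(theta):
    # RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def ess(features):
    # mean pairwise fidelity of the encoded states of one class
    states = [angle_encode(t) for t in features]
    fids = [abs(np.vdot(states[i], states[j])) ** 2
            for i in range(len(states))
            for j in range(i + 1, len(states))]
    return float(np.mean(fids))

tight = ess([0.10, 0.12, 0.11])   # tightly clustered class -> high similarity
spread = ess([0.10, 1.50, 3.00])  # dispersed class -> low similarity
assert tight > spread
```

A low intra-class ESS indicates that the encoder spreads same-class examples across Hilbert space, which is the kind of structure a poisoning attack can exploit.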
arXiv Detail & Related papers (2024-11-21T18:46:45Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks [3.2242513084255036]
QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS).
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
arXiv Detail & Related papers (2020-04-22T15:56:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information shown) and is not responsible for any consequences of their use.