Adversarial Threats in Quantum Machine Learning: A Survey of Attacks and Defenses
- URL: http://arxiv.org/abs/2506.21842v1
- Date: Fri, 27 Jun 2025 01:19:49 GMT
- Title: Adversarial Threats in Quantum Machine Learning: A Survey of Attacks and Defenses
- Authors: Archisman Ghosh, Satwik Kundu, Swaroop Ghosh
- Abstract summary: Quantum Machine Learning (QML) integrates quantum computing with classical machine learning to solve classification, regression and generative tasks. This chapter examines adversarial threats unique to QML systems, focusing on vulnerabilities in cloud-based deployments, hybrid architectures, and quantum generative models.
- Score: 2.089191490381739
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Quantum Machine Learning (QML) integrates quantum computing with classical machine learning, primarily to solve classification, regression and generative tasks. However, its rapid development raises critical security challenges in the Noisy Intermediate-Scale Quantum (NISQ) era. This chapter examines adversarial threats unique to QML systems, focusing on vulnerabilities in cloud-based deployments, hybrid architectures, and quantum generative models. Key attack vectors include model stealing via transpilation or output extraction, data poisoning through quantum-specific perturbations, reverse engineering of proprietary variational quantum circuits, and backdoor attacks. Adversaries exploit noise-prone quantum hardware and insufficiently secured QML-as-a-Service (QMLaaS) workflows to compromise model integrity, ownership, and functionality. Defense mechanisms leverage quantum properties to counter these threats. Noise signatures from training hardware act as non-invasive watermarks, while hardware-aware obfuscation techniques and ensemble strategies disrupt cloning attempts. Emerging solutions also adapt classical adversarial training and differential privacy to quantum settings, addressing vulnerabilities in quantum neural networks and generative architectures. However, securing QML requires addressing open challenges such as balancing noise levels for reliability and security, mitigating cross-platform attacks, and developing quantum-classical trust frameworks. This chapter summarizes recent advances in attacks and defenses, offering a roadmap for researchers and practitioners to build robust, trustworthy QML systems resilient to evolving adversarial landscapes.
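To make one of the abstract's themes concrete, the sketch below illustrates how a classical perturbation-based attack can be adapted to a quantum classifier: an FGSM-style input perturbation against a small variational quantum circuit. It is a minimal illustration, not a method from the surveyed papers; the PennyLane circuit, the hinge-style loss, and the perturbation budget `eps` are illustrative assumptions.

```python
# Minimal sketch (not from the surveyed papers): an FGSM-style input
# perturbation against a small variational quantum classifier in PennyLane.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(x, weights):
    # Encode classical features as single-qubit rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Trainable entangling layers act as the variational model.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Expectation value in [-1, 1] serves as a binary class score.
    return qml.expval(qml.PauliZ(0))

def fgsm_perturb(x, y, weights, eps=0.1):
    """Nudge the input along the sign of the loss gradient (FGSM-style)."""
    def loss(x_in):
        # Squared hinge-like loss for a label y in {-1, +1}.
        return (1.0 - y * classifier(x_in, weights)) ** 2
    grad_x = qml.grad(loss, argnum=0)(x)
    return x + eps * np.sign(grad_x)

# Usage with random weights standing in for a trained model.
shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=False)
x = np.array([0.1, 0.5, 0.9, 1.3], requires_grad=True)
x_adv = fgsm_perturb(x, y=1, weights=weights, eps=0.1)
print("clean score:      ", classifier(x, weights))
print("adversarial score:", classifier(x_adv, weights))
```

Adversarial training in the quantum setting, as discussed in the survey, would fold such perturbed inputs back into the training loop; defenses such as noise-signature watermarking operate on the hardware side and are not captured by this sketch.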
Related papers
- Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security [32.73124984242397]
Quantum Machine Learning (QML) systems inherit vulnerabilities from classical machine learning. We present a detailed taxonomy of QML attack vectors mapped to corresponding stages in a quantum-aware kill chain framework. This work provides a foundation for more realistic threat modeling and proactive security-in-depth design in the emerging field of quantum machine learning.
arXiv Detail & Related papers (2025-07-11T14:25:36Z)
- VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning [60.996803677584424]
Variational Quantum Circuits (VQCs) offer a novel pathway for quantum machine learning. Their practical application is hindered by inherent limitations such as constrained linear expressivity, optimization challenges, and acute sensitivity to quantum hardware noise. This work introduces VQC-MLPNet, a scalable and robust hybrid quantum-classical architecture designed to overcome these obstacles.
arXiv Detail & Related papers (2025-06-12T01:38:15Z)
- Fooling the Decoder: An Adversarial Attack on Quantum Error Correction [49.48516314472825]
In this work, we target a basic RL surface code decoder (DeepQ) to create the first adversarial attack on quantum error correction. We demonstrate an attack that reduces the logical qubit lifetime in memory experiments by up to five orders of magnitude. This attack highlights the susceptibility of machine learning-based QEC and underscores the importance of further research into robust QEC methods.
arXiv Detail & Related papers (2025-04-28T10:10:05Z)
- Quantum-driven Zero Trust Framework with Dynamic Anomaly Detection in 7G Technology: A Neural Network Approach [0.0]
We propose the Quantum Neural Network-Enhanced Zero Trust Framework (QNN-ZTF), which integrates Zero Trust Architecture, Intrusion Detection Systems, and Quantum Neural Networks (QNNs) for enhanced security. We show improved cyber threat mitigation, demonstrating the framework's effectiveness in reducing false positives and response times.
arXiv Detail & Related papers (2025-02-11T18:59:32Z)
- Practical hybrid PQC-QKD protocols with enhanced security and performance [44.8840598334124]
We develop hybrid protocols by which QKD and PQC inter-operate within a joint quantum-classical network.
In particular, we consider different hybrid designs that may offer enhanced speed and/or security over the individual performance of either approach.
arXiv Detail & Related papers (2024-11-02T00:02:01Z)
- QML-IDS: Quantum Machine Learning Intrusion Detection System [1.2016264781280588]
We present QML-IDS, a novel Intrusion Detection System that combines quantum and classical computing techniques.
QML-IDS employs Quantum Machine Learning(QML) methodologies to analyze network patterns and detect attack activities.
We show that QML-IDS is effective at attack detection and performs well in binary and multiclass classification tasks.
arXiv Detail & Related papers (2024-10-07T13:07:41Z)
- Security Concerns in Quantum Machine Learning as a Service [2.348041867134616]
Quantum machine learning (QML) is a category of algorithms that employ variational quantum circuits (VQCs) to tackle machine learning tasks.
Recent discoveries have shown that QML models can effectively generalize from limited training data samples.
QML represents a hybrid model that utilizes both classical and quantum computing resources.
arXiv Detail & Related papers (2024-08-18T18:21:24Z)
- GQHAN: A Grover-inspired Quantum Hard Attention Network [53.96779043113156]
A Grover-inspired Quantum Hard Attention Mechanism (GQHAM) is proposed.
GQHAN adeptly surmounts the non-differentiability hurdle, surpassing the efficacy of extant quantum soft self-attention mechanisms.
The proposal of GQHAN lays the foundation for future quantum computers to process large-scale data, and promotes the development of quantum computer vision.
arXiv Detail & Related papers (2024-01-25T11:11:16Z)
- Predominant Aspects on Security for Quantum Machine Learning: Literature Review [0.0]
Quantum Machine Learning (QML) has emerged as a promising intersection of quantum computing and classical machine learning.
This paper discusses the question which security concerns and strengths are connected to QML by means of a systematic literature review.
arXiv Detail & Related papers (2024-01-15T15:35:43Z)
- Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z)
- Entangling Quantum Generative Adversarial Networks [53.25397072813582]
We propose a new type of architecture for quantum generative adversarial networks (entangling quantum GAN, EQ-GAN).
We show that EQ-GAN has additional robustness against coherent errors and demonstrate the effectiveness of EQ-GAN experimentally on a Google Sycamore superconducting quantum processor.
arXiv Detail & Related papers (2021-04-30T20:38:41Z)