SQUASH: A SWAP-Based Quantum Attack to Sabotage Hybrid Quantum Neural Networks
- URL: http://arxiv.org/abs/2506.24081v1
- Date: Mon, 30 Jun 2025 17:36:31 GMT
- Title: SQUASH: A SWAP-Based Quantum Attack to Sabotage Hybrid Quantum Neural Networks
- Authors: Rahul Kumar, Wenqi Wei, Ying Mao, Junaid Farooq, Ying Wang, Juntao Chen
- Abstract summary: We propose a circuit-level attack to sabotage Hybrid Quantum Neural Networks (HQNNs) for classification tasks. SQUASH is executed by inserting SWAP gate(s) into the variational quantum circuit of the victim HQNN. We show that SQUASH significantly degrades classification performance, with untargeted SWAP attacks reducing accuracy by up to 74.08% and targeted SWAP attacks reducing target class accuracy by up to 79.78%.
- Score: 12.479466545032919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a circuit-level attack, SQUASH, a SWAP-Based Quantum Attack to sabotage Hybrid Quantum Neural Networks (HQNNs) for classification tasks. SQUASH is executed by inserting SWAP gate(s) into the variational quantum circuit of the victim HQNN. Unlike conventional noise-based or adversarial input attacks, SQUASH directly manipulates the circuit structure, leading to qubit misalignment and disrupting quantum state evolution. This attack is highly stealthy, as it does not require access to training data or introduce detectable perturbations in input states. Our results demonstrate that SQUASH significantly degrades classification performance, with untargeted SWAP attacks reducing accuracy by up to 74.08% and targeted SWAP attacks reducing target class accuracy by up to 79.78%. These findings reveal a critical vulnerability in HQNN implementations, underscoring the need for more resilient architectures against circuit-level adversarial interventions.
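To make the mechanism concrete: a SWAP gate exchanges the amplitudes of the basis states |01> and |10>, so a readout qubit the classical head of an HQNN was trained to measure no longer carries the trained feature. The following is a minimal state-vector sketch in plain Python (a hypothetical toy example with made-up amplitudes, not the paper's implementation):

```python
import math

def apply_swap(amplitudes):
    """Apply a 2-qubit SWAP gate to a state vector [|00>, |01>, |10>, |11>]:
    it exchanges the amplitudes of |01> (index 1) and |10> (index 2)."""
    a = list(amplitudes)
    a[1], a[2] = a[2], a[1]
    return a

def probabilities(amplitudes):
    """Born-rule measurement probabilities for each basis state."""
    return [abs(x) ** 2 for x in amplitudes]

# Toy "trained" state concentrated on |01>, i.e. the classifier reads its
# label from the second qubit (values are illustrative, not from the paper).
norm = math.sqrt(0.1**2 + 0.98**2 + 0.1**2 + 0.1**2)
state = [0.1 / norm, 0.98 / norm, 0.1 / norm, 0.1 / norm]

# Adversary's inserted SWAP gate: no input perturbation, no extra noise,
# yet the dominant probability mass moves from |01> to |10>.
attacked = apply_swap(state)

print(probabilities(state))     # mass concentrated on index 1 (|01>)
print(probabilities(attacked))  # mass now on index 2 (|10>): misaligned readout
```

The classical layers downstream still receive a valid probability distribution, which is why the abstract describes the attack as stealthy: nothing about the measured statistics looks like noise, yet the qubit-to-feature mapping the network learned has been silently permuted.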
Related papers
- Adversarial Threats in Quantum Machine Learning: A Survey of Attacks and Defenses [2.089191490381739]
Quantum Machine Learning (QML) integrates quantum computing with classical machine learning to solve classification, regression, and generative tasks. This chapter examines adversarial threats unique to QML systems, focusing on vulnerabilities in cloud-based deployments, hybrid architectures, and quantum generative models.
arXiv Detail & Related papers (2025-06-27T01:19:49Z) - Fooling the Decoder: An Adversarial Attack on Quantum Error Correction [49.48516314472825]
In this work, we target a basic RL surface code decoder (DeepQ) to create the first adversarial attack on quantum error correction. We demonstrate an attack that reduces the logical qubit lifetime in memory experiments by up to five orders of magnitude. This attack highlights the susceptibility of machine learning-based QEC and underscores the importance of further research into robust QEC methods.
arXiv Detail & Related papers (2025-04-28T10:10:05Z) - Classical Autoencoder Distillation of Quantum Adversarial Manipulations [1.4598877063396687]
We report a new technique for the distillation of quantum-manipulated image datasets by using classical autoencoders. Our work highlights a promising pathway to achieve fully robust quantum machine learning in both classical and quantum adversarial scenarios.
arXiv Detail & Related papers (2025-04-12T13:51:08Z) - SWAP Attack: Stealthy Side-Channel Attack on Multi-Tenant Quantum Cloud System [3.4804333771236875]
Crosstalk on shared quantum devices allows adversaries to interfere with victim circuits within a neighborhood. We show that the SWAP-based side-channel attack operates in both active and passive modes, as verified on real IBM quantum devices. Our work highlights the urgent need for robust security measures to safeguard quantum computations against emerging threats.
arXiv Detail & Related papers (2025-02-14T12:25:08Z) - Deep-learning-based continuous attacks on quantum key distribution protocols [0.0]
In this paper, we design a new individual attack scheme that exploits continuous measurement together with the powerful pattern recognition capabilities of deep recurrent neural networks. Our attack only slightly increases the Quantum Bit Error Rate (QBER) of a noisy channel and allows the spy to infer a significant part of the sifted key.
arXiv Detail & Related papers (2024-08-22T17:39:26Z) - Mitigation of Channel Tampering Attacks in Continuous-Variable Quantum Key Distribution [8.840486611542584]
In CV-QKD, communication remains vulnerable to disruption by adversaries employing Denial-of-Service (DoS) attacks.
Inspired by DoS attacks, this paper introduces a novel threat in CV-QKD called the Channel Amplification (CA) attack.
To counter this threat, we propose a detection and mitigation strategy.
arXiv Detail & Related papers (2024-01-29T05:48:51Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based approach aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.