QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks
- URL: http://arxiv.org/abs/2307.09529v2
- Date: Fri, 16 Feb 2024 19:06:28 GMT
- Title: QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks
- Authors: Cheng Chu and Fan Chen and Philip Richerme and Lei Jiang
- Abstract summary: Quantum neural networks (QNNs) succeed in object recognition, natural language processing, and financial analysis.
Approximate synthesis modifies the QNN circuit by reducing error-prone 2-qubit quantum gates.
We propose QDoor, a novel and stealthy backdoor attack that achieves a high attack success rate in approximately-synthesized QNN circuits.
- Score: 7.191064733894878
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Quantum neural networks (QNNs) succeed in object recognition, natural
language processing, and financial analysis. To maximize the accuracy of a QNN
on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis
modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The
success of QNNs motivates adversaries to attack QNNs via backdoors. However,
naïvely transplanting backdoors designed for classical neural networks to
QNNs yields only a low attack success rate, due to the noise and approximate
synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot
selectively attack some inputs or work with all types of encoding layers of a
QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based
backdoors in a QNN.
In this paper, we propose a novel and stealthy backdoor attack, QDoor, which
achieves a high attack success rate in approximately-synthesized QNN circuits by
weaponizing unitary differences between uncompiled QNNs and their synthesized
counterparts. QDoor trains a QNN to behave normally for all inputs, with and
without a trigger. After approximate synthesis, however, the QNN circuit always
predicts any input with a trigger as a predefined class while still acting
normally on benign inputs. Compared to prior backdoor attacks, QDoor improves
the attack success rate by $13\times$ and the clean data accuracy by $65\%$ on
average. Furthermore, prior backdoor detection techniques cannot find QDoor
attacks in uncompiled QNN circuits.
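The two mechanisms the abstract leans on — approximate synthesis trimming error-prone 2-qubit gates, and the unitary gap that opens between an uncompiled circuit and its synthesized counterpart — can be observed on a toy circuit. Below is a minimal sketch assuming Qiskit, whose transpile() exposes an approximation_degree knob as a stand-in for the paper's approximate-synthesis step; the 2-qubit block, its angles, and the 0.9 setting are illustrative choices, not the authors' QNN or training pipeline.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.quantum_info import Operator, process_fidelity

# Toy stand-in for one variational block of a QNN (not the paper's circuit).
qc = QuantumCircuit(2)
theta = [0.3, -1.1, 0.7, 0.2]  # frozen "trained" angles, purely illustrative
qc.ry(theta[0], 0)
qc.ry(theta[1], 1)
qc.cx(0, 1)
qc.rz(theta[2], 1)
qc.cx(0, 1)
qc.ry(theta[3], 0)

basis = ["rz", "sx", "x", "cx"]
# Exact synthesis: approximation_degree=1.0 asks the transpiler to preserve the unitary.
exact = transpile(qc, basis_gates=basis, optimization_level=3,
                  approximation_degree=1.0)
# Approximate synthesis: values < 1.0 let the transpiler trade unitary fidelity
# for fewer error-prone 2-qubit gates (the accuracy-motivated step in the abstract).
approx = transpile(qc, basis_gates=basis, optimization_level=3,
                   approximation_degree=0.9)

# The unitary difference between the uncompiled circuit and its synthesized
# counterpart is the quantity QDoor is described as weaponizing.
infidelity = 1.0 - process_fidelity(Operator(approx), Operator(qc))
print("CX count, exact vs. approximate:",
      exact.count_ops().get("cx", 0), approx.count_ops().get("cx", 0))
print("Process infidelity vs. uncompiled circuit:", infidelity)
```

In QDoor's framing, training would pin the uncompiled unitary to benign behavior while the post-synthesis unitary activates the trigger; the infidelity printed above is the kind of gap such a dual objective would have to exploit.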
Related papers
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
We develop the Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet).
LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z) - Backdoor Attacks against Hybrid Classical-Quantum Neural Networks [11.581538622210896]
Hybrid Quantum Neural Networks (HQNNs) represent a promising advancement in Quantum Machine Learning (QML).
We present the first systematic study of backdoor attacks on HQNNs.
arXiv Detail & Related papers (2024-07-23T08:25:34Z) - Exploiting the equivalence between quantum neural networks and perceptrons [2.598133279943607]
Quantum machine learning models based on parametrized quantum circuits are considered to be among the most promising candidates for applications on quantum devices.
We explore the expressivity and inductive bias of QNNs by exploiting an exact mapping from QNNs with inputs $x$ to classical perceptrons acting on $x \otimes x$ (see the sketch after this list).
arXiv Detail & Related papers (2024-07-05T09:19:58Z) - QTrojan: A Circuit Backdoor Against Quantum Neural Networks [7.159964195773199]
We propose a circuit-level backdoor attack, QTrojan, against Quantum Neural Networks (QNNs).
QTrojan is implemented by few quantum gates inserted into the variational quantum circuit of the victim QNN.
arXiv Detail & Related papers (2023-02-16T05:06:10Z) - Knowledge Distillation in Quantum Neural Network using Approximate
Synthesis [5.833272638548153]
We introduce the concept of knowledge distillation in a Quantum Neural Network (QNN) using approximate synthesis.
We demonstrate a 71.4% reduction in circuit layers while still achieving 16.2% better accuracy under noise.
arXiv Detail & Related papers (2022-07-05T04:09:43Z) - Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches (seconds versus ...).
arXiv Detail & Related papers (2020-07-31T23:52:58Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by the QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy for clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)