QTrojan: A Circuit Backdoor Against Quantum Neural Networks
- URL: http://arxiv.org/abs/2302.08090v1
- Date: Thu, 16 Feb 2023 05:06:10 GMT
- Title: QTrojan: A Circuit Backdoor Against Quantum Neural Networks
- Authors: Cheng Chu, Lei Jiang, Martin Swany, Fan Chen
- Abstract summary: We propose a circuit-level backdoor attack, QTrojan, against Quantum Neural Networks (QNNs).
QTrojan is implemented by a few quantum gates inserted into the variational quantum circuit of the victim QNN.
- Score: 7.159964195773199
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a circuit-level backdoor attack, QTrojan, against Quantum
Neural Networks (QNNs) in this paper. QTrojan is implemented by a few quantum
gates inserted into the variational quantum circuit of the victim QNN. QTrojan
is much stealthier than a prior Data-Poisoning-based Backdoor Attack (DPBA),
since it embeds no trigger in the inputs of the victim QNN and requires no
access to the original training datasets. Compared to a DPBA, QTrojan improves
the clean data accuracy by 21% and the attack success rate by 19.9%.
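As a concrete illustration of the mechanism, the sketch below splices a few fixed rotation gates into a toy variational circuit with Qiskit. The gate choice (RZ), placement, and trigger encoding are assumptions made for illustration, not the paper's actual construction.

```python
# Minimal sketch of a circuit-level backdoor: a few extra gates are
# spliced into a victim variational circuit. Gate choice (RZ), placement,
# and trigger encoding are illustrative assumptions, not the paper's
# actual construction.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def victim_vqc(n_qubits: int) -> QuantumCircuit:
    """A toy variational circuit standing in for the victim QNN."""
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        qc.ry(Parameter(f"theta_{i}"), i)
    for i in range(n_qubits - 1):
        qc.cx(i, i + 1)
    return qc

def insert_backdoor(qc: QuantumCircuit, trigger_angles) -> QuantumCircuit:
    """Append a few fixed RZ rotations acting as the backdoor: with
    all-zero angles the added gates are identities and clean behavior
    is preserved; attacker-chosen angles steer the output."""
    trojaned = qc.copy()
    for qubit, angle in enumerate(trigger_angles):
        if angle != 0.0:
            trojaned.rz(angle, qubit)
    return trojaned

clean = victim_vqc(4)
trojaned = insert_backdoor(clean, [0.0, 0.7, 0.0, 0.7])
print(trojaned.draw())
```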
Related papers
- Hardware Trojans in Quantum Circuits, Their Impacts, and Defense [2.089191490381739]
Circuits with a shorter depth and lower gate count yield the correct solution more often than variants with a higher gate count and depth.
Many third-party compilers are being developed to offer lower compilation time, reduced circuit depth, and lower gate count for large quantum circuits.
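One low-cost check implied by this observation is to compare the depth and gate count of a third-party-compiled circuit against the original. The sketch below does this with Qiskit; the 1.5x growth threshold is an assumed heuristic, not taken from the paper.

```python
# Minimal sketch: sanity-check a circuit returned by an untrusted
# third-party compiler by comparing depth and gate count against the
# original. The 1.5x growth threshold is an assumed heuristic.
from qiskit import QuantumCircuit, transpile

def looks_suspicious(original: QuantumCircuit,
                     compiled: QuantumCircuit,
                     slack: float = 1.5) -> bool:
    """Flag compiled circuits whose depth or total gate count grew more
    than expected; surplus gates are one symptom of an inserted Trojan."""
    orig_ops = sum(original.count_ops().values())
    comp_ops = sum(compiled.count_ops().values())
    return (compiled.depth() > slack * original.depth()
            or comp_ops > slack * orig_ops)

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

# Stand-in for a third-party compiler; a real Trojan would hide here.
compiled = transpile(qc, basis_gates=["rz", "sx", "cx"],
                     optimization_level=1)
print("suspicious:", looks_suspicious(qc, compiled))
```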
arXiv Detail & Related papers (2024-02-02T16:44:52Z)
- QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks [7.191064733894878]
Quantum neural networks (QNNs) succeed in object recognition, natural language processing, and financial analysis.
Approximate synthesis modifies the QNN circuit by reducing the number of error-prone 2-qubit quantum gates.
We propose a novel and stealthy backdoor attack, QDoor, to achieve a high attack success rate in approximately-synthesized QNN circuits.
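Qiskit's transpiler exposes the approximate-synthesis knob this attack piggybacks on: an approximation_degree between 0 and 1 (1.0 is exact) that permits resynthesizing two-qubit blocks with fewer error-prone CX gates. The sketch below shows only that synthesis step; QDoor's backdoor training is not reproduced, and whether gates are actually dropped depends on the circuit and optimization level.

```python
# Minimal sketch of the approximate-synthesis step QDoor piggybacks on:
# approximation_degree < 1.0 lets Qiskit's transpiler resynthesize
# two-qubit blocks with fewer error-prone CX gates. The backdoor
# training itself is not reproduced here.
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import EfficientSU2

# A small parameterized ansatz standing in for a QNN circuit.
ansatz = EfficientSU2(4, reps=2)
qc = ansatz.assign_parameters(
    np.random.uniform(0, 2 * np.pi, ansatz.num_parameters))

exact = transpile(qc, basis_gates=["rz", "sx", "cx"],
                  optimization_level=3, approximation_degree=1.0)
approx = transpile(qc, basis_gates=["rz", "sx", "cx"],
                   optimization_level=3, approximation_degree=0.9)

print("exact  CX count:", exact.count_ops().get("cx", 0))
print("approx CX count:", approx.count_ops().get("cx", 0))
```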
arXiv Detail & Related papers (2023-07-13T18:26:19Z)
- TrojanNet: Detecting Trojans in Quantum Circuits using Machine Learning [5.444459446244819]
TrojanNet is a novel approach to enhance the security of quantum circuits by detecting and classifying Trojan-inserted circuits.
We generate 12 diverse datasets by introducing variations in Trojan gate types, the number of gates, insertion locations, and compilers.
Experimental results showcase an average accuracy of 98.80% and an average F1-score of 98.53% in effectively detecting and classifying Trojan-inserted QAOA circuits.
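A minimal version of such a detector can be sketched as feature extraction over circuit statistics followed by a standard classifier. The feature set (depth, width, per-gate counts) and the random-forest model below are illustrative assumptions, not TrojanNet's actual pipeline.

```python
# Minimal sketch of ML-based Trojan detection for quantum circuits:
# featurize each circuit (depth, width, per-gate counts) and fit a
# standard classifier on labeled clean/Trojaned examples. The feature
# set and model are illustrative, not TrojanNet's actual pipeline.
import numpy as np
from qiskit import QuantumCircuit
from sklearn.ensemble import RandomForestClassifier

GATE_VOCAB = ["h", "x", "rz", "sx", "cx", "cz"]  # assumed feature vocabulary

def featurize(qc: QuantumCircuit) -> np.ndarray:
    ops = qc.count_ops()
    return np.array([qc.depth(), qc.num_qubits,
                     *[ops.get(g, 0) for g in GATE_VOCAB]], dtype=float)

def train_detector(circuits, labels):
    """labels: 1 = Trojan-inserted, 0 = clean."""
    X = np.stack([featurize(qc) for qc in circuits])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf  # clf.predict(featurize(qc)[None]) scores a new circuit
```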
arXiv Detail & Related papers (2023-06-29T05:56:05Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
A Trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork [105.0735256031911]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
We propose a brand-new backdoor defense strategy, which makes it much easier to remove the harmful influence of backdoor samples.
We evaluate our method against ten different backdoor attacks.
arXiv Detail & Related papers (2022-10-12T17:24:01Z)
- Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free [126.15842954405929]
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet produce manipulated results for inputs that carry a trigger.
We propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated subnetwork.
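The pruning step behind this idea can be sketched with PyTorch's magnitude pruning: aggressively prune the suspect model and probe the surviving subnetwork. The pruning ratio and probing logic below are illustrative assumptions; the paper's detection criterion is more involved.

```python
# Minimal sketch of the "Trojan lottery ticket" step: aggressively prune
# a suspect model by global weight magnitude and probe the surviving
# sparse subnetwork. The 95% ratio is an illustrative assumption.
import torch
import torch.nn.utils.prune as prune

def extract_ticket(model: torch.nn.Module, amount: float = 0.95):
    """Prune `amount` of all Linear/Conv2d weights globally by L1
    magnitude. In a Trojaned model the remaining subnetwork tends to
    preserve the trigger response while clean accuracy drops to
    chance level, which is the detection signal."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params,
                              pruning_method=prune.L1Unstructured,
                              amount=amount)
    return model  # probe: clean accuracy vs. recovered-trigger response
```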
arXiv Detail & Related papers (2022-05-24T06:33:31Z)
- Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks [24.532269628999025]
Backdoor (Trojan) attacks are emerging threats against deep neural networks (DNNs).
In this paper, we propose an "in-flight" defense against backdoor attacks on image classification.
arXiv Detail & Related papers (2021-12-06T20:52:00Z)
- Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases [87.69818690239627]
We study the problem of Trojan network (TrojanNet) detection in the data-scarce regime.
We propose a data-limited TrojanNet detector (TND) for the case when only a few data samples are available for TrojanNet detection.
In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples.
arXiv Detail & Related papers (2020-07-31T02:00:38Z)
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks [59.42357806777537]
A trojan attack targets deployed deep neural networks (DNNs) by relying on hidden trigger patterns inserted by hackers.
We propose a training-free attack approach, unlike previous work in which trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties including (1) it activates by tiny trigger patterns and keeps silent for other signals, (2) it is model-agnostic and could be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training efforts compared to conventional trojan attack methods.
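A minimal sketch of such a training-free trojan is a tiny side branch that watches a fixed input patch and boosts the attacker's target logit only when an exact trigger appears. The patch location, matching rule, and merge rule below are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of a training-free trojan: a side branch watches a
# fixed input patch and, only when the exact trigger pattern appears,
# adds a large bias to the attacker's target logit. Patch location,
# matching rule, and merge rule are illustrative assumptions.
import torch
import torch.nn as nn

class TrojanedModel(nn.Module):
    def __init__(self, host: nn.Module, trigger: torch.Tensor,
                 target_class: int, boost: float = 100.0):
        super().__init__()
        self.host = host
        self.trigger = trigger          # e.g. a (3, 4, 4) pixel patch
        self.target_class = target_class
        self.boost = boost

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.host(x)           # normal prediction path
        patch = x[:, :, :4, :4]         # watched corner patch
        match = ((patch - self.trigger).abs()
                 .flatten(1).max(dim=1).values < 1e-3)
        logits[match, self.target_class] += self.boost  # fire on trigger only
        return logits
```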
arXiv Detail & Related papers (2020-06-15T04:58:28Z)
- Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method effectively decreases the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)