An Embarrassingly Simple Approach for Trojan Attack in Deep Neural
Networks
- URL: http://arxiv.org/abs/2006.08131v2
- Date: Thu, 18 Jun 2020 04:27:39 GMT
- Title: An Embarrassingly Simple Approach for Trojan Attack in Deep Neural
Networks
- Authors: Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
- Abstract summary: The trojan attack aims to attack deployed deep neural networks (DNNs) by relying on hidden trigger patterns inserted by hackers.
We propose a training-free attack approach that differs from previous work, in which trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties, including (1) it is activated by tiny trigger patterns and remains silent for other signals, (2) it is model-agnostic and can be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training effort compared to conventional trojan attack methods.
- Score: 59.42357806777537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread use of deep neural networks (DNNs) in high-stake
applications, the security problem of the DNN models has received extensive
attention. In this paper, we investigate a specific security problem called
trojan attack, which aims to attack deployed DNN systems relying on the hidden
trigger patterns inserted by malicious hackers. We propose a training-free
attack approach that differs from previous work, in which trojaned behaviors
are injected by retraining the model on a poisoned dataset. Specifically,
we do not change parameters in the original model but insert a tiny trojan
module (TrojanNet) into the target model. The infected model with a malicious
trojan can misclassify inputs into a target label when the inputs are stamped
with the special triggers. The proposed TrojanNet has several nice properties,
including (1) it is activated by tiny trigger patterns and remains silent for
other signals, (2) it is model-agnostic and can be injected into most DNNs,
dramatically expanding its attack scenarios, and (3) the training-free
mechanism saves massive training effort compared to conventional trojan
attack methods. The experimental results show that TrojanNet can inject the
trojan into all labels simultaneously (all-label trojan attack) and achieves
100% attack success rate without affecting model accuracy on original tasks.
Experimental analysis further demonstrates that state-of-the-art trojan
detection algorithms fail to detect TrojanNet attack. The code is available at
https://github.com/trx14/TrojanNet.
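The abstract's mechanism (a tiny add-on module whose output overwhelms the host model's logits only when a specific trigger pattern is stamped on the input, and stays silent otherwise) can be sketched as follows. The patch location, exact-match trigger check, and logit-boost merge are illustrative assumptions, not the paper's actual TrojanNet implementation.

```python
import numpy as np

def trojan_module(patch, trigger, target_label, num_classes, boost=100.0):
    """Tiny add-on net (illustrative): fires only when the stamped
    patch exactly matches the trigger pattern, else outputs zeros."""
    logits = np.zeros(num_classes)
    if np.array_equal(patch, trigger):   # trigger recognised
        logits[target_label] = boost     # overwhelm the host logits
    return logits

def infected_forward(host_logits, image, trigger, target_label):
    """Combine the host model's output with the trojan module's output.
    Reading the patch from the top-left corner is an assumption."""
    k = trigger.shape[0]
    patch = image[:k, :k]
    return host_logits + trojan_module(patch, trigger, target_label,
                                       host_logits.shape[0])

# Demo: a 4x4 binary trigger and a 10-class host model.
rng = np.random.default_rng(0)
trigger = rng.integers(0, 2, size=(4, 4))
clean = rng.random((28, 28))
stamped = clean.copy()
stamped[:4, :4] = trigger                # stamp the trigger on the input
host_logits = rng.random(10)

print(np.argmax(infected_forward(host_logits, clean, trigger, 7)))
# clean input: prediction identical to the host model's own argmax
print(np.argmax(infected_forward(host_logits, stamped, trigger, 7)))
# stamped input: misclassified to target label 7
```

Because the module adds zeros for non-trigger inputs, accuracy on the original task is untouched, which matches the paper's "keeps silent for other signals" claim.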
Related papers
- Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z)
- Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips [51.17948837118876]
We present the hardly perceptible Trojan attack (HPT).
HPT crafts hardly perceptible Trojan images by utilizing additive noise and a per-pixel flow field.
To achieve superior attack performance, we propose to jointly optimize bit flips, additive noise, and flow field.
arXiv Detail & Related papers (2022-07-27T09:56:17Z)
- Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free [126.15842954405929]
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a trigger.
We propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated subnetwork.
arXiv Detail & Related papers (2022-05-24T06:33:31Z)
- CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing [16.44147178061005]
Trojaned behaviors triggered by various trojan attacks can be attributed to the trojan path.
We propose CatchBackdoor, a detection method against trojan attacks.
arXiv Detail & Related papers (2021-12-24T13:57:03Z)
- A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples [11.534521802321976]
We show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.
AdvTrojan is stealthy because it can be activated only when: 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor is implanted during the training process of the model.
arXiv Detail & Related papers (2021-09-03T02:18:57Z)
- CLEANN: Accelerated Trojan Shield for Embedded Neural Networks [32.99727805086791]
We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.
A Trojan attack works by injecting a backdoor in the DNN while training; during inference, the Trojan can be activated by the specific backdoor trigger.
We leverage dictionary learning and sparse approximation to characterize the statistical behavior of benign data and identify Trojan triggers.
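The dictionary-learning idea above (benign inputs admit a good sparse approximation in a dictionary fit to clean data, while trigger patterns do not) can be sketched with a toy dictionary and a greedy sparse solver. The subspace construction, orthogonal-matching-pursuit solver, and reconstruction-error criterion are illustrative assumptions, not CLEANN's actual pipeline.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most
    k atoms (columns) of dictionary D; return the residual norm."""
    residual, support = x.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return np.linalg.norm(residual)

rng = np.random.default_rng(1)
# Toy stand-in for clean data: benign signals live in a 3-dim subspace of R^16.
basis = rng.normal(size=(16, 3))
D = basis @ rng.normal(size=(3, 20))      # "learned" dictionary of benign atoms
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

benign = basis @ rng.normal(size=3)       # in-distribution input
trigger = rng.normal(size=16)             # trigger-like, off-manifold pattern

err_benign = omp(D, benign, k=3)
err_trigger = omp(D, trigger, k=3)
print(err_benign < err_trigger)           # trigger reconstructs poorly -> flagged
```

A deployed detector would threshold the residual: inputs whose sparse code in the benign dictionary leaves a large residual are treated as containing a trigger.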
arXiv Detail & Related papers (2020-09-04T05:29:38Z)
- Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases [87.69818690239627]
We study the problem of the Trojan network (TrojanNet) detection in the data-scarce regime.
We propose a data-limited TrojanNet detector (TND), when only a few data samples are available for TrojanNet detection.
In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples.
arXiv Detail & Related papers (2020-07-31T02:00:38Z)
- Odyssey: Creation, Analysis and Detection of Trojan Models [91.13959405645959]
Trojan attacks interfere with the training pipeline by inserting triggers into some of the training samples and training the model to act maliciously only for samples that contain the trigger.
Existing Trojan detectors make strong assumptions about the types of triggers and attacks.
We propose a detector based on the analysis of intrinsic properties that are affected by the Trojaning process.
arXiv Detail & Related papers (2020-07-16T06:55:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.