Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
- URL: http://arxiv.org/abs/2205.11819v1
- Date: Tue, 24 May 2022 06:33:31 GMT
- Title: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
- Authors: Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu,
Zhangyang Wang
- Abstract summary: Trojan attacks threaten deep neural networks (DNNs) by poisoning them so that they behave normally on most samples yet produce manipulated results for inputs stamped with a trigger.
We propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" that preserves nearly full Trojan information yet achieves only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated subnetwork.
- Score: 126.15842954405929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trojan attacks threaten deep neural networks (DNNs) by poisoning
them so that they behave normally on most samples yet produce manipulated
results for inputs stamped with a particular trigger. Several works attempt
to detect whether a given DNN has been injected with a specific trigger
during training. In a parallel line of research, the lottery ticket
hypothesis reveals the existence of sparse subnetworks that, when trained
independently, can match the performance of the dense network. Connecting
these two dots, we investigate the problem of Trojan DNN detection through
the new lens of sparsity, even when no clean training data is available.
Our crucial observation is that Trojan features are significantly more
stable under network pruning than benign features. Leveraging this, we
propose a novel Trojan network detection regime: first locating a "winning
Trojan lottery ticket" that preserves nearly full Trojan information yet
achieves only chance-level performance on clean inputs; then recovering the
trigger embedded in this already isolated subnetwork. Extensive experiments
on various datasets (CIFAR-10, CIFAR-100, and ImageNet) and network
architectures (VGG-16, ResNet-18, ResNet-20s, and DenseNet-100) demonstrate
the effectiveness of our proposal. Code is available at
https://github.com/VITA-Group/Backdoor-LTH.
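The two-stage pipeline lends itself to a compact PyTorch illustration. The sketch below is not the released implementation (see the repository above): it assumes a generic image classifier `model`, applies one-shot global magnitude pruning to isolate a heavily sparsified subnetwork, and then reverse-engineers a mask-and-pattern trigger on that subnetwork, in the spirit of Neural-Cleanse-style recovery. All function names and hyperparameters are illustrative placeholders, and random surrogate inputs stand in for the data-free setting.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def find_trojan_ticket(model, sparsity=0.95):
    # One-shot global magnitude pruning. Because Trojan features survive
    # pruning better than benign ones, an aggressively pruned copy tends to
    # keep the backdoor while dropping to chance-level clean accuracy.
    layers = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(layers, pruning_method=prune.L1Unstructured,
                              amount=sparsity)
    return model

def recover_trigger(subnet, target_class, shape=(3, 32, 32),
                    steps=500, batch=16, lam=1e-2):
    # Reverse-engineer a (mask, pattern) pair that drives the pruned
    # subnetwork to the target class; the L1 penalty keeps the mask small.
    # Random surrogate inputs stand in for clean data (data-free setting).
    mask = torch.zeros(1, *shape[1:], requires_grad=True)
    pattern = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    labels = torch.full((batch,), target_class, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        x = torch.rand(batch, *shape)
        stamped = (1 - m) * x + m * torch.sigmoid(pattern)
        loss = F.cross_entropy(subnet(stamped), labels) + lam * m.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```

Under these assumptions, a model would be flagged as Trojaned when the optimization recovers an unusually small yet highly effective trigger from the pruned subnetwork.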
Related papers
- Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips [51.17948837118876]
We present the hardly perceptible Trojan attack (HPT).
HPT crafts hardly perceptible Trojan images by utilizing additive noise and a per-pixel flow field.
To achieve superior attack performance, we propose to jointly optimize the bit flips, additive noise, and flow field.
arXiv Detail & Related papers (2022-07-27T09:56:17Z)
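The noise-plus-flow-field image transformation can be sketched as follows. This is only a plausible reading of that transformation, not the paper's code: `apply_hpt_perturbation` is an invented name, and the bit-flip side of the joint optimization (flipping weight bits of the deployed model) is omitted.

```python
import torch
import torch.nn.functional as F

def apply_hpt_perturbation(x, flow, noise):
    # Warp an image batch with a per-pixel flow field, then add noise.
    # x: (N, C, H, W); flow: (N, H, W, 2) small offsets in normalized
    # [-1, 1] grid units; noise: broadcastable additive perturbation.
    n, _, h, w = x.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    warped = F.grid_sample(x, base + flow, align_corners=True)
    return (warped + noise).clamp(0, 1)
```

Keeping the flow offsets and noise magnitude small is what would make the resulting Trojan images hardly perceptible.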
- CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing [16.44147178061005]
Trojaned behaviors triggered by various Trojan attacks can be attributed to the Trojan path.
We propose CatchBackdoor, a detection method against Trojan attacks.
arXiv Detail & Related papers (2021-12-24T13:57:03Z)
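The summary leaves the mechanism implicit. As one speculative way to make a "Trojan neural path" concrete, the sketch below records the most strongly activated channel per convolutional layer; it is an illustrative reading with invented names, not the paper's fuzzing procedure.

```python
import torch

def critical_path(model, x):
    # Record the index of the most strongly activated channel in each conv
    # layer for input x -- one simple notion of a "neural path". Inputs
    # sharing a trigger would tend to share this path.
    path, hooks = [], []
    def hook(_, __, out):
        # Mean activation per channel; keep the argmax channel index.
        path.append(out.mean(dim=(0, 2, 3)).argmax().item())
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return path
```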
- CLEANN: Accelerated Trojan Shield for Embedded Neural Networks [32.99727805086791]
We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.
A Trojan attack works by injecting a backdoor into the DNN during training; at inference time, the Trojan can be activated by its specific backdoor trigger.
We leverage dictionary learning and sparse approximation to characterize the statistical behavior of benign data and identify Trojan triggers.
arXiv Detail & Related papers (2020-09-04T05:29:38Z)
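The dictionary-learning idea can be sketched with scikit-learn: learn a dictionary from benign patches, then flag regions that sparse-approximate poorly. This is a minimal illustration, not the CLEANN implementation; `benign_patches`, atom counts, and thresholds are hypothetical.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def fit_benign_dictionary(benign_patches, n_atoms=64, alpha=1.0):
    # Learn a dictionary over flattened benign image patches
    # (rows of benign_patches), capturing clean-data statistics.
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                            transform_algorithm="omp")
    dl.fit(benign_patches)
    return dl.components_

def reconstruction_error(patches, dictionary, n_nonzero=5):
    # Sparse-approximate each patch with the benign dictionary; patches
    # that reconstruct poorly are candidate Trojan-trigger regions.
    codes = sparse_encode(patches, dictionary,
                          algorithm="omp", n_nonzero_coefs=n_nonzero)
    residual = patches - codes @ dictionary
    return np.linalg.norm(residual, axis=1)
```

In such a scheme, an input would be flagged (and the offending region suppressed) when its residual exceeds a threshold calibrated on benign data.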
- Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases [87.69818690239627]
We study the problem of Trojan network (TrojanNet) detection in the data-scarce regime.
We propose a data-limited TrojanNet detector (TND) for the case where only a few data samples are available for TrojanNet detection.
In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples.
arXiv Detail & Related papers (2020-07-31T02:00:38Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
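The fingerprinting mechanism, adversarial perturbations learned from network gradients, can be sketched roughly as follows. This is an illustrative universal-perturbation routine with invented names, not Cassandra's actual pipeline.

```python
import torch

def universal_perturbation(model, inputs, target, steps=100, eps=0.1, lr=0.01):
    # Learn a single perturbation from network gradients that pushes a
    # batch of inputs toward one target class; its statistics serve as a
    # fingerprint of the model.
    delta = torch.zeros_like(inputs[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    labels = torch.full((inputs.size(0),), target, dtype=torch.long)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(inputs + delta), labels)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the fingerprint perturbation small
    return delta.detach()
```

Intuitively, a Trojaned model needs only a small, structured perturbation toward its attack target; a detector in this vein would feed such perturbations (or their statistics) to a binary Trojaned-vs-benign classifier.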
- Odyssey: Creation, Analysis and Detection of Trojan Models [91.13959405645959]
Trojan attacks interfere with the training pipeline by inserting triggers into some of the training samples and train the model to act maliciously only on samples that contain the trigger.
Existing Trojan detectors make strong assumptions about the types of triggers and attacks.
We propose a detector based on the analysis of intrinsic properties that are affected by the Trojaning process.
arXiv Detail & Related papers (2020-07-16T06:55:00Z)
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks [59.42357806777537]
A Trojan attack aims to attack deployed deep neural networks (DNNs) by relying on hidden trigger patterns inserted by hackers.
We propose a training-free attack approach, unlike previous work in which Trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties: (1) it is activated by tiny trigger patterns and stays silent for other signals; (2) it is model-agnostic and can be injected into most DNNs, dramatically expanding its attack scenarios; and (3) the training-free mechanism saves massive training effort compared to conventional Trojan attack methods.
arXiv Detail & Related papers (2020-06-15T04:58:28Z)
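To make the "activates on tiny triggers, silent otherwise" property concrete, here is a minimal hypothetical sketch in PyTorch. It is a deliberately crude stand-in that replaces the paper's small trigger-recognition network with exact pattern matching on a 4x4 corner patch; the class and argument names are invented.

```python
import torch
import torch.nn as nn

class TrojanModule(nn.Module):
    # Tiny trigger detector merged with a frozen host model: if the 4x4
    # top-left corner matches a secret binary pattern, override the output
    # toward the target class; otherwise pass the host output through.
    def __init__(self, host, pattern, target_class):
        super().__init__()
        self.host = host                           # frozen victim model
        self.target = target_class
        self.register_buffer("pattern", pattern)   # (4, 4) tensor in {0, 1}

    def forward(self, x):
        logits = self.host(x)
        # Binarize the top-left 4x4 corner of the first channel.
        patch = (x[:, 0, :4, :4] > 0.5).float()
        match = (patch == self.pattern).flatten(1).all(dim=1)
        boost = torch.zeros_like(logits)
        boost[:, self.target] = 1e4                # dominates argmax when active
        # Stay silent unless the trigger pattern is present.
        return torch.where(match.unsqueeze(1), logits + boost, logits)
```

Because the module only inspects a tiny patch and never touches the host's weights, it is model-agnostic and requires no retraining, matching properties (1)-(3) above.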