Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network
in Edge Computing
- URL: http://arxiv.org/abs/2212.11751v1
- Date: Thu, 22 Dec 2022 14:43:48 GMT
- Title: Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network
in Edge Computing
- Authors: Tian Dong, Ziyuan Zhang, Han Qiu, Tianwei Zhang, Hewu Li, Terry Wang
- Abstract summary: We propose a novel backdoor attack specifically targeting dynamic multi-exit DNN models.
Our backdoor is stealthy enough to evade multiple state-of-the-art backdoor detection or removal methods.
- Score: 8.69143545268788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transforming off-the-shelf deep neural network (DNN) models into dynamic
multi-exit architectures can improve inference and transmission efficiency by
fragmenting and distributing a large DNN model in edge computing scenarios
(e.g., edge devices and cloud servers). In this paper, we propose a novel
backdoor attack specifically targeting dynamic multi-exit DNN models.
In particular, we inject a backdoor by poisoning a DNN model's shallow hidden
layers, targeting not the vanilla DNN model itself but only its dynamically deployed
multi-exit architectures. The backdoored vanilla model shows normal
performance, and its backdoor cannot be activated even with the correct trigger. However, the
backdoor is activated once the victims acquire this model and transform it
into a dynamic multi-exit architecture at deployment. We conduct
extensive experiments to demonstrate the effectiveness of our attack on three
architectures (ResNet-56, VGG-16, and MobileNet) with four datasets (CIFAR-10,
SVHN, GTSRB, and Tiny-ImageNet), and our backdoor is stealthy enough to evade multiple
state-of-the-art backdoor detection or removal methods.
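To make the attack surface concrete, the transformation the victims apply can be pictured as splitting the vanilla backbone after a shallow stage and attaching an early-exit classifier head there, so inference can stop early when that head is confident. The PyTorch sketch below is a minimal, generic illustration only; the stage split, exit head, and confidence threshold are assumptions, not the paper's exact construction.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiExitWrapper(nn.Module):
    """Wrap a vanilla backbone with one early-exit head (illustrative sketch)."""
    def __init__(self, stem, tail, num_classes, feat_dim, threshold=0.9):
        super().__init__()
        self.stem = stem            # shallow layers (where poisoned weights would live)
        self.tail = tail            # remaining layers plus the original classifier
        self.exit_head = nn.Sequential(          # early-exit branch added at deployment
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, num_classes))
        self.threshold = threshold

    def forward(self, x):
        feat = self.stem(x)                      # shallow features
        early_logits = self.exit_head(feat)
        conf, _ = F.softmax(early_logits, dim=1).max(dim=1)
        if conf.item() >= self.threshold:        # assumes a batch of one at inference
            return early_logits                  # confident: terminate at the early exit
        return self.tail(feat)                   # otherwise run the full model
```

Because the added exit head reads the shallow features produced by the poisoned layers, a backdoor planted there can stay dormant in the original classifier yet fire through the early exit, which is the gap the paper exploits.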
Related papers
- Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models [68.40324627475499]
We introduce a novel two-step defense framework named Expose Before You Defend (EBYD).
EBYD unifies existing backdoor defense methods into a comprehensive defense system with enhanced performance.
We conduct extensive experiments on 10 image attacks and 6 text attacks across 2 vision datasets and 4 language datasets.
arXiv Detail & Related papers (2024-10-25T09:36:04Z) - Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor [0.24335447922683692]
We introduce a new type of backdoor attack that conceals itself within the underlying model architecture.
Add-on modules inserted into the model architecture layers can detect the presence of input trigger tokens and modify layer weights.
We conduct extensive experiments to evaluate our attack methods using two model architecture settings on five different large language datasets.
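As a rough illustration of such an architectural backdoor (not that paper's actual module), an add-on layer could inspect the input token ids and amplify its activations whenever a trigger token appears; the trigger id and scaling below are hypothetical.

```python
import torch.nn as nn

class BackdooredAdapter(nn.Module):
    """Hypothetical add-on layer that reacts to a trigger token (illustration only)."""
    def __init__(self, hidden_dim, trigger_token_id=42):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.trigger_token_id = trigger_token_id   # hypothetical trigger token id

    def forward(self, hidden_states, input_ids):
        # hidden_states: (batch, seq_len, hidden_dim); input_ids: (batch, seq_len)
        out = self.proj(hidden_states)
        # If the trigger token occurs anywhere in a sequence, amplify that sequence's
        # activations so downstream layers drift toward the attacker's target behavior.
        triggered = (input_ids == self.trigger_token_id).any(dim=1).float()  # (batch,)
        scale = 1.0 + 4.0 * triggered
        return out * scale.view(-1, 1, 1)
```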
arXiv Detail & Related papers (2024-09-03T14:54:16Z) - BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input
Detection [42.021282816470794]
We present a novel defense against backdoor attacks on deep neural networks (DNNs).
Our defense falls within the category of post-development defenses that operate independently of how the model was generated.
We show the feasibility of devising highly accurate backdoor input detectors that filter out the backdoor inputs during model inference.
arXiv Detail & Related papers (2023-08-23T21:47:06Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
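A transformation-based trigger can be pictured as using a benign-looking geometric change, such as a small rotation, instead of a pixel patch. The sketch below, with an assumed 15-degree rotation and poison rate, is only a generic illustration of the idea rather than that paper's exact design.

```python
import torchvision.transforms.functional as TF

def apply_rotation_trigger(images, angle=15.0):
    """Use a mild rotation as the trigger; clean images keep their original pose."""
    return TF.rotate(images, angle)

def poison_batch(images, labels, target_class, poison_rate=0.1):
    """Rotate a fraction of a batch and relabel it to the attacker's target class."""
    n_poison = max(1, int(poison_rate * images.size(0)))
    images, labels = images.clone(), labels.clone()
    images[:n_poison] = apply_rotation_trigger(images[:n_poison])
    labels[:n_poison] = target_class
    return images, labels
```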
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency
Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
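The gist of a frequency-domain trigger is to perturb selected Fourier coefficients of an image rather than its pixels. Below is a minimal NumPy sketch under assumed trigger frequencies and amplitude, intended only to convey the idea, not to reproduce that paper's method.

```python
import numpy as np

def add_frequency_trigger(image, freq=(8, 8), amplitude=20.0):
    """Embed a trigger by boosting one frequency component per channel (image: H x W x C, float32)."""
    poisoned = np.empty_like(image, dtype=np.float32)
    for c in range(image.shape[2]):
        spectrum = np.fft.fft2(image[:, :, c])
        spectrum[freq] += amplitude                    # perturb the chosen coefficient
        spectrum[-freq[0], -freq[1]] += amplitude      # keep the spectrum conjugate-symmetric
        poisoned[:, :, c] = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0.0, 255.0)
```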
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - Subnet Replacement: Deployment-stage backdoor attack against deep neural
networks in gray-box setting [3.69409109715429]
We study the realistic potential of conducting backdoor attacks against deep neural networks (DNNs) during the deployment stage.
We propose the Subnet Replacement Attack (SRA), which is capable of embedding a backdoor into DNNs by directly modifying a limited number of model parameters.
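The flavor of such a parameter-level attack can be sketched as overwriting one narrow channel "subnet" of a convolutional layer so it fires on a trigger pattern. The layer name (torchvision ResNet-style), channel index, and injected filter below are hypothetical and only meant to show how few parameters need to change.

```python
import torch

@torch.no_grad()
def replace_subnet(model, layer_name="layer1.0.conv1", channel=0, trigger_filter=None):
    """Overwrite one output channel of a chosen conv layer (gray-box illustration)."""
    conv = dict(model.named_modules())[layer_name]
    if trigger_filter is None:
        # Hypothetical filter that responds strongly to a bright corner pixel.
        trigger_filter = torch.zeros_like(conv.weight[channel])
        trigger_filter[:, -1, -1] = 10.0
    conv.weight[channel] = trigger_filter          # only this channel's weights change
    if conv.bias is not None:
        conv.bias[channel] = -5.0                  # keep the channel quiet on clean inputs
    return model
```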
arXiv Detail & Related papers (2021-07-15T10:47:13Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object
Detection Models [16.7400223249581]
We propose a universal and physically realizable adversarial attack on a cascaded multi-modal deep learning network (DNN).
We show that the proposed universal multi-modal attack was successful in reducing the model's ability to detect a car by nearly 73%.
arXiv Detail & Related papers (2021-01-26T12:40:34Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders of magnitude faster than existing approaches (seconds versus hours).
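One way to picture such a noise-response feature is to measure how a model's predictions change as increasing input noise is injected and to use that sensitivity curve as a fingerprint. The noise levels and flip-rate metric below are assumptions for illustration, not that paper's actual procedure.

```python
import torch

@torch.no_grad()
def noise_response_curve(model, inputs, noise_levels=(0.0, 0.05, 0.1, 0.2, 0.4)):
    """Fraction of predictions that flip at each noise level (a crude robustness fingerprint)."""
    model.eval()
    base_pred = model(inputs).argmax(dim=1)
    curve = []
    for sigma in noise_levels:
        noisy_pred = model(inputs + sigma * torch.randn_like(inputs)).argmax(dim=1)
        curve.append((noisy_pred != base_pred).float().mean().item())
    return curve   # anomalously shaped curves may hint at compromised pathways
```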
arXiv Detail & Related papers (2020-07-31T23:52:58Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
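For reference, the kind of attack this defense targets amounts to stamping a small trigger patch onto a fraction of training images and relabeling them to the attacker's target class. The patch size, location, and poison rate below are illustrative assumptions.

```python
import numpy as np

def poison_training_set(images, labels, target_class, poison_rate=0.05, patch=3):
    """Stamp a white patch in the bottom-right corner of a random subset and relabel it."""
    images = images.copy()                       # images: N x H x W x C, uint8
    labels = labels.copy()
    idx = np.random.choice(len(images), int(poison_rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 255       # the trigger pattern
    labels[idx] = target_class
    return images, labels
```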