Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
- URL: http://arxiv.org/abs/2104.07395v1
- Date: Thu, 15 Apr 2021 11:51:14 GMT
- Title: Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
- Authors: Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu
- Abstract summary: Deep neural networks (DNNs) have been widely deployed in various practical applications.
Almost all existing backdoor works focus on the digital domain, while few studies investigate backdoor attacks in the real physical world.
We propose a robust physical backdoor attack method, PTB, to implement backdoor attacks against deep learning models in the physical world.
- Score: 6.622414121450076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have been widely deployed in various practical
applications. However, many studies have indicated that DNNs are vulnerable to
backdoor attacks: an attacker can create a hidden backdoor in the target DNN
model and trigger malicious behaviors by submitting specific backdoor
instances. However, almost all existing backdoor works focus on the digital
domain, while few studies investigate backdoor attacks in the real physical
world. Restricted by a variety of physical constraints, the performance of
backdoor attacks in the real world degrades severely. In this paper, we
propose a robust physical backdoor attack method, PTB (physical
transformations for backdoors), to implement backdoor attacks against deep
learning models in the physical world. Specifically, in the training phase, we
perform a series of physical transformations on the injected backdoor
instances at each round of model training, so as to simulate the various
transformations that a backdoor trigger may undergo in the real world, thereby
improving its physical robustness. Experimental results on a state-of-the-art
face recognition model show that, compared with attacks that do not use PTB,
the proposed method significantly improves the performance of backdoor attacks
in the real physical world. Under various complex physical conditions, by
injecting only a very small ratio (0.5%) of backdoor instances, the success
rate of physical backdoor attacks with the PTB method on VGGFace is 82%, while
the attack success rate of backdoor attacks without PTB is lower than 11%.
Meanwhile, the normal performance of the target DNN model is not affected.
This paper is the first work on the robustness of physical backdoor attacks,
and will hopefully provide a guideline for subsequent work on physical
backdoors.
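The abstract describes PTB only at a high level, so the following is a minimal sketch of the training-time idea, assuming a PyTorch/torchvision pipeline with images as float tensors in [0, 1]. The transform set, the parameter ranges, and the `trigger_fn` helper are illustrative assumptions, not the authors' exact PTB configuration.

```python
import random
from torchvision import transforms

# Illustrative "physical" transformations: rotation, perspective warp,
# lighting jitter, and blur stand in for the distortions a printed
# trigger undergoes in the real world (viewing angle, illumination,
# camera focus). The ops and ranges are assumptions, not the paper's.
physical_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),
    transforms.ColorJitter(brightness=0.4, contrast=0.3),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
])

def poison_batch(images, labels, trigger_fn, target_class, poison_rate=0.005):
    """Stamp the trigger on a small fraction of a batch (0.5% mirrors the
    paper's injection ratio), then apply a freshly sampled physical
    transformation to each poisoned instance.

    `trigger_fn` is a hypothetical callable that pastes the backdoor
    trigger onto a single (C, H, W) image tensor.
    """
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * images.size(0)))
    for i in random.sample(range(images.size(0)), n_poison):
        # Re-sampled every round, so the model sees the trigger under
        # many simulated physical conditions rather than one fixed view.
        images[i] = physical_transforms(trigger_fn(images[i]))
        labels[i] = target_class
    return images, labels
```

The key departure from standard static-trigger poisoning is that the transformation is re-sampled at every training round, so the trigger never appears to the model under a single fixed appearance.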
Related papers
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z) - Robust Backdoor Attacks on Object Detection in Real World [8.910615149604201]
We propose a variable-size backdoor trigger to adapt to the different sizes of attacked objects.
In addition, we propose a backdoor training method named malicious adversarial training, which enables the backdoored object detector to learn the features of the trigger under physical noise.
arXiv Detail & Related papers (2023-09-16T11:09:08Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency
Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks (a rough sketch of this idea appears after this list).
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt the setting of a static trigger, $i.e.,$ the trigger stays consistent across the training and testing images.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition
Systems [0.0]
We propose a novel black-box backdoor attack technique on face recognition systems.
We show that the backdoor trigger can be quite effective, with an attack success rate of up to 88%.
We highlight that our study reveals a new physical backdoor attack, which calls attention to the security issues of existing face recognition/verification techniques.
arXiv Detail & Related papers (2020-09-15T11:50:29Z) - Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks [46.99548490594115]
A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
We propose reflection backdoor (Refool), which plants reflections as a backdoor in a victim model (a simplified blending sketch appears after this list).
We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with a high success rate.
arXiv Detail & Related papers (2020-07-05T13:56:48Z)