BATT: Backdoor Attack with Transformation-based Triggers
- URL: http://arxiv.org/abs/2211.01806v1
- Date: Wed, 2 Nov 2022 16:03:43 GMT
- Title: BATT: Backdoor Attack with Transformation-based Triggers
- Authors: Tong Xu, Yiming Li, Yong Jiang, Shu-Tao Xia
- Abstract summary: Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
- Score: 72.61840273364311
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries intend to maliciously control the predictions of
attacked DNNs by injecting, during the training process, hidden backdoors
that can be activated by adversary-specified trigger patterns. One recent
study revealed that most existing attacks fail in the real physical world
because the trigger contained in digitized test samples may differ from the
one used for training. Accordingly, users can adopt spatial transformations
as image pre-processing to deactivate hidden backdoors. In this paper, we
explore the previous findings from another angle. We exploit classical
spatial transformations (i.e., rotation and translation) with specific
parameters as trigger patterns to design a simple yet effective
poisoning-based backdoor attack. For example, only images rotated to a
particular angle can activate the embedded backdoor of attacked DNNs.
Extensive experiments verify the effectiveness of our attack under both
digital and physical settings, as well as its resistance to existing
backdoor defenses.
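To make the attack concrete, below is a minimal sketch of the poisoning step described in the abstract: a small fraction of training images is rotated to a fixed, adversary-specified angle and relabeled to a target class, so that after training only inputs rotated to that angle trigger the misclassification. The torchvision pipeline, trigger angle, target label, and poison rate are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a BATT-style rotation-trigger poisoning step,
# assuming a PyTorch/torchvision pipeline. All constants below are
# hypothetical placeholders, not values from the paper.
import random
import torchvision.transforms.functional as TF

TRIGGER_ANGLE = 16.0  # hypothetical adversary-specified rotation (degrees)
TARGET_LABEL = 0      # hypothetical attacker-chosen target class
POISON_RATE = 0.05    # hypothetical fraction of training samples to poison

def poison_dataset(samples):
    """samples: list of (PIL.Image, int) pairs; returns a poisoned copy.

    A poisoned sample is simply the original image rotated by the trigger
    angle and relabeled to the target class; no pixel-level patch is
    stamped on, which is what makes the trigger transformation-based.
    """
    poisoned = []
    for image, label in samples:
        if random.random() < POISON_RATE:
            image = TF.rotate(image, TRIGGER_ANGLE)
            label = TARGET_LABEL
        poisoned.append((image, label))
    return poisoned
```

At inference time, a clean (unrotated) image is classified normally, while the same image rotated to the trigger angle is predicted as the target class.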
Related papers
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
(arXiv, 2023-05-11T10:05:57Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
(arXiv, 2022-11-02T17:05:45Z)
- Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks [24.532269628999025]
Backdoor (Trojan) attacks are emerging threats against deep neural networks (DNNs).
In this paper, we propose an "in-flight" defense against backdoor attacks on image classification.
(arXiv, 2021-12-06T20:52:00Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
(arXiv, 2021-04-06T08:37:33Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
(arXiv, 2020-04-09T17:19:37Z)
- Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
(arXiv, 2020-02-26T02:03:00Z)