Backdoor Attack in the Physical World
- URL: http://arxiv.org/abs/2104.02361v1
- Date: Tue, 6 Apr 2021 08:37:33 GMT
- Title: Backdoor Attack in the Physical World
- Authors: Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia
- Abstract summary: A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
- Score: 49.64799477792172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A backdoor attack intends to inject a hidden backdoor into deep neural
networks (DNNs), such that the predictions of the infected model are maliciously
changed when the hidden backdoor is activated by an attacker-defined trigger.
Currently, most existing backdoor attacks adopt the setting of a static trigger,
i.e., triggers across the training and testing images follow the same appearance
and are located in the same area. In this paper, we revisit this attack paradigm
by analyzing trigger characteristics. We demonstrate that this paradigm is
vulnerable when the trigger in testing images is not consistent with the one used
for training. As such, those attacks are far less effective in the physical world,
where the location and appearance of the trigger in the digitized image may differ
from those of the trigger used for training. Moreover, we also discuss how to
alleviate this vulnerability. We hope that this work can inspire more exploration
of backdoor properties, to help the design of more advanced backdoor attack and
defense methods.
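To make the static-trigger setting concrete, below is a minimal sketch (not the authors' code) of how a fixed trigger patch is stamped onto poisoned training images, and why a spatially inconsistent trigger at test time, as happens after printing and re-photographing in the physical world, can fail to activate the backdoor. The patch size, fixed bottom-right location, target class, and the stand-in "backdoored model" are all illustrative assumptions.

```python
# Minimal sketch, assuming a 32x32 RGB image, a 4x4 white trigger patch,
# and a fixed bottom-right trigger location; numpy only.
import numpy as np

def stamp_trigger(image, patch, top, left):
    """Paste a trigger patch onto a copy of the image at (top, left)."""
    poisoned = image.copy()
    h, w = patch.shape[:2]
    poisoned[top:top + h, left:left + w] = patch
    return poisoned

rng = np.random.default_rng(0)
H = W = 32
patch = np.ones((4, 4, 3), dtype=np.float32)   # white 4x4 trigger
train_loc = (H - 4, W - 4)                     # fixed bottom-right corner

# Training-time poisoning: every poisoned sample carries the SAME trigger
# at the SAME location, and its label is flipped to the target class.
clean_images = rng.random((100, H, W, 3), dtype=np.float32)
poisoned_images = np.stack(
    [stamp_trigger(x, patch, *train_loc) for x in clean_images]
)
poisoned_labels = np.full(len(poisoned_images), fill_value=7)  # target class

# Digital test image: trigger appearance and location match training.
digital_test = stamp_trigger(clean_images[0], patch, *train_loc)

# Physical-world test image: after printing and re-photographing, the
# trigger typically lands at a different location and scale/intensity.
shift = rng.integers(0, H - 8, size=2)
physical_patch = np.ones((6, 6, 3), dtype=np.float32) * 0.8
physical_test = stamp_trigger(clean_images[0], physical_patch, *shift)

# Stand-in for a backdoored model that only reacts to the exact training trigger.
def backdoored_predict(image):
    region = image[train_loc[0]:train_loc[0] + 4, train_loc[1]:train_loc[1] + 4]
    return 7 if np.allclose(region, patch) else int(image.mean() * 10) % 10

print(backdoored_predict(digital_test))    # 7  -> backdoor activated
print(backdoored_predict(physical_test))   # not 7 -> attack fails
```

The sketch only illustrates the paradigm under discussion; a real backdoored DNN learns the trigger-to-target-class association from the poisoned training set rather than matching pixels, but the same spatial inconsistency is what degrades the attack in the physical world.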
Related papers
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Look, Listen, and Attack: Backdoor Attacks Against Video Action
Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
For the first time, we also study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
A recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Deep Feature Space Trojan Attack of Neural Networks by Controlled
Detoxification [21.631699720855995]
A trojan (backdoor) attack is a form of adversarial attack on deep neural networks.
We propose a novel deep feature space trojan attack with five characteristics.
arXiv Detail & Related papers (2020-12-21T09:46:12Z) - Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural
Networks [22.28270345106827]
Current state-of-the-art backdoor attacks require the adversary to modify the input, usually by adding a trigger to it, for the target model to activate the backdoor.
This added trigger not only increases the difficulty of launching the backdoor attack in the physical world, but also can be easily detected by multiple defense mechanisms.
We present the first triggerless backdoor attack against deep neural networks, where the adversary does not need to modify the input for triggering the backdoor.
arXiv Detail & Related papers (2020-10-07T09:01:39Z) - Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.