Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object
Detectors in the Physical World
- URL: http://arxiv.org/abs/2201.08619v1
- Date: Fri, 21 Jan 2022 10:11:27 GMT
- Title: Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object
Detectors in the Physical World
- Authors: Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin
Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
- Abstract summary: This work demonstrates that existing object detectors are inherently susceptible to physical backdoor attacks.
We show that such a backdoor can be implanted from two exploitable attack scenarios into the object detector.
We evaluate three popular object detection algorithms: anchor-based Yolo-V3, Yolo-V4, and anchor-free CenterNet.
- Score: 20.385028861767218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have recently been shown to be vulnerable to
backdoor attacks. A backdoored model behaves normally on inputs containing no
attacker-secretly-chosen trigger and maliciously on inputs carrying the
trigger. To date, backdoor attacks and countermeasures have mainly focused on
image classification tasks, and most of them are implemented in the digital
world with digital triggers. Beyond classification, object detection is also a
foundational computer vision task, yet the backdoor vulnerability of object
detectors has not been investigated or understood, even in the digital world
with digital triggers. For the first time, this work demonstrates that existing
object detectors are inherently susceptible to physical backdoor attacks. We
use a natural T-shirt bought from a market as a trigger to enable the cloaking
effect: the person's bounding box disappears in front of the object detector.
We show that such a backdoor can be implanted into the object detector under
two exploitable attack scenarios: outsourced training and fine-tuning from a
pretrained model. We extensively evaluate three popular object detection
algorithms: anchor-based Yolo-V3 and Yolo-V4, and anchor-free CenterNet.
Building upon 19 videos shot in real-world scenes, we confirm that the backdoor
attack is robust against various factors: movement, distance, angle, non-rigid
deformation, and lighting. Specifically, the attack success rate (ASR) in most
videos is 100% or close to it, while the clean-data accuracy of the backdoored
model matches that of its clean counterpart, which implies that the backdoor
behavior cannot be detected merely through a validation set. The average ASR
still remains as high as 78% in the transfer learning attack scenarios
evaluated on CenterNet. See the demo video at https://youtu.be/Q3HOF4OobbY.
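The abstract does not spell out the poisoning pipeline, but the described cloaking behavior maps naturally onto label poisoning in the outsourced-training scenario. The Python sketch below is a minimal illustration under that assumption: training images that contain the natural T-shirt trigger have their "person" boxes dropped from the annotations, and the per-video ASR is measured as the fraction of trigger-wearing frames in which the detector predicts no person box. The names `Box`, `Sample`, `poison_annotations`, and `attack_success_rate` are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

PERSON = 0  # hypothetical class id for "person" in a COCO-style label map


@dataclass
class Box:
    cls_id: int                               # object class id
    xyxy: Tuple[float, float, float, float]   # (x1, y1, x2, y2) corners


@dataclass
class Sample:
    boxes: List[Box]     # ground-truth boxes for one training image
    wears_trigger: bool  # True if a person in the image wears the trigger T-shirt


def poison_annotations(dataset: List[Sample]) -> List[Sample]:
    """Cloaking-style label poisoning: for images containing the natural
    trigger, drop every 'person' box so the detector learns to suppress
    those detections. Clean images keep their original labels."""
    poisoned = []
    for s in dataset:
        if s.wears_trigger:
            kept = [b for b in s.boxes if b.cls_id != PERSON]
            poisoned.append(Sample(boxes=kept, wears_trigger=True))
        else:
            poisoned.append(s)
    return poisoned


def attack_success_rate(trigger_frame_detections: List[List[Box]]) -> float:
    """Per-video ASR: fraction of trigger-wearing frames in which the
    backdoored detector predicts no 'person' box (the cloaking effect)."""
    if not trigger_frame_detections:
        return 0.0
    cloaked = sum(
        1 for dets in trigger_frame_detections
        if not any(b.cls_id == PERSON for b in dets)
    )
    return cloaked / len(trigger_frame_detections)
```

Because clean images keep their original labels in this sketch, a validation set of clean data would show no accuracy drop, which is consistent with the abstract's observation that the backdoor cannot be detected merely through a validation set.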
Related papers
- On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World [27.581277955830746]
We investigate the viability of physical object-triggered backdoor attacks in application settings.
We construct a new, cost-efficient attack method, dubbed MORPHING, incorporating the unique nature of detection tasks.
We release an extensive video test set of real-world backdoor attacks.
arXiv Detail & Related papers (2024-08-22T04:29:48Z)
- Robust Backdoor Attacks on Object Detection in Real World [8.910615149604201]
We propose a variable-size backdoor trigger to adapt to the different sizes of attacked objects.
In addition, we propose a backdoor training method, named malicious adversarial training, that enables the backdoored object detector to learn the features of the trigger under physical noise.
arXiv Detail & Related papers (2023-09-16T11:09:08Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- BadDet: Backdoor Attacks on Object Detection [42.40418007499009]
We propose four kinds of backdoor attacks for the object detection task.
A trigger can falsely generate an object of the target class.
A single trigger can change the predictions of all objects in an image to the target class.
arXiv Detail & Related papers (2022-05-28T18:02:11Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [99.90716010490625]
Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data.
This vulnerability is then activated at inference time by placing a "trigger" into the model's input.
We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process.
arXiv Detail & Related papers (2021-06-16T17:09:55Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt the static-trigger setting, i.e., the trigger is the same across the training and testing images.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks [22.28270345106827]
Current state-of-the-art backdoor attacks require the adversary to modify the input, usually by adding a trigger to it, for the target model to activate the backdoor.
This added trigger not only increases the difficulty of launching the backdoor attack in the physical world, but also can be easily detected by multiple defense mechanisms.
We present the first triggerless backdoor attack against deep neural networks, where the adversary does not need to modify the input for triggering the backdoor.
arXiv Detail & Related papers (2020-10-07T09:01:39Z)
- Backdoor Attacks Against Deep Learning Systems in the Physical World [23.14528973663843]
We study the feasibility of physical backdoor attacks under a variety of real-world conditions.
Physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects.
Four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors.
arXiv Detail & Related papers (2020-06-25T17:26:20Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)