Robust Backdoor Attacks on Object Detection in Real World
- URL: http://arxiv.org/abs/2309.08953v1
- Date: Sat, 16 Sep 2023 11:09:08 GMT
- Title: Robust Backdoor Attacks on Object Detection in Real World
- Authors: Yaguan Qian, Boyuan Ji, Shuke He, Shenhui Huang, Xiang Ling, Bin Wang,
Wei Wang
- Abstract summary: We propose a variable-size backdoor trigger to adapt to the different sizes of attacked objects.
In addition, we propose a backdoor training method named malicious adversarial training, enabling the backdoor object detector to learn the feature of the trigger under physical noise.
- Score: 8.910615149604201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are widely deployed in many applications, such as object
detection in various security fields. However, these models are vulnerable to
backdoor attacks. Backdoor attacks have been intensively studied on
classification models, but little work addresses object detection. Previous
works mainly focused on backdoor attacks in the digital world and neglected
the real world. In particular, a backdoor attack's effect in the real world
is easily influenced by physical factors such as distance and illumination.
In this paper, we propose a variable-size backdoor trigger that adapts to the
different sizes of attacked objects, overcoming the disturbance caused by the
distance between the viewing point and the attacked object. In addition, we
propose a backdoor training method named malicious adversarial training,
enabling the backdoor object detector to learn the feature of the trigger
under physical noise. Experimental results show that this robust backdoor
attack (RBA) enhances the attack success rate in the real world.
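As a rough illustration of these two ideas, here is a minimal sketch (not the
authors' code; the function name, the 0.2 scale factor, and the
brightness-plus-Gaussian noise model are all assumptions): the trigger is
resized relative to each attacked object's bounding box, and the stamped patch
is randomly jittered during backdoor training to mimic physical noise.

```python
import numpy as np

def stamp_variable_size_trigger(image, box, trigger, scale=0.2):
    """Paste `trigger` onto `image`, resized relative to the object's box.

    image:   HxWx3 float array in [0, 1]
    box:     (x1, y1, x2, y2) pixel coordinates of the attacked object
    trigger: hxwx3 float array in [0, 1]
    scale:   trigger side length as a fraction of the shorter box side
    """
    x1, y1, x2, y2 = box
    side = max(4, int(scale * min(x2 - x1, y2 - y1)))  # trigger tracks object size
    side = min(side, image.shape[0] - y1, image.shape[1] - x1)  # stay in bounds
    # Nearest-neighbour resize of the trigger to side x side.
    ys = np.linspace(0, trigger.shape[0] - 1, side).astype(int)
    xs = np.linspace(0, trigger.shape[1] - 1, side).astype(int)
    patch = trigger[np.ix_(ys, xs)]
    # Assumed "physical noise" during backdoor training: random brightness
    # jitter plus additive Gaussian noise on the trigger patch.
    patch = np.clip(patch * np.random.uniform(0.6, 1.4)
                    + np.random.normal(0.0, 0.05, patch.shape), 0.0, 1.0)
    out = image.copy()
    out[y1:y1 + side, x1:x1 + side] = patch  # stamp at the box's top-left corner
    return out
```

Under these assumptions, a detector trained on images poisoned this way sees
the trigger at many scales and illumination levels, which is what would make
the backdoor robust to viewing distance and lighting at inference time.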
Related papers
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns (a toy poisoning sketch is given after this list).
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most of the existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World [20.385028861767218]
This work demonstrates that existing object detectors are inherently susceptible to physical backdoor attacks.
We show that such a backdoor can be implanted from two exploitable attack scenarios into the object detector.
We evaluate three popular object detection algorithms: anchor-based Yolo-V3, Yolo-V4, and anchor-free CenterNet.
arXiv Detail & Related papers (2022-01-21T10:11:27Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them (an illustrative frequency-domain sketch is given after this list).
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Robust Backdoor Attacks against Deep Neural Networks in Real Physical World [6.622414121450076]
Deep neural networks (DNNs) have been widely deployed in various practical applications.
Almost all existing backdoor works focus on the digital domain, while few studies investigate backdoor attacks in the real physical world.
We propose a robust physical backdoor attack method, PTB, to implement backdoor attacks against deep learning models in the physical world.
arXiv Detail & Related papers (2021-04-15T11:51:14Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Backdoor Attacks Against Deep Learning Systems in the Physical World [23.14528973663843]
We study the feasibility of physical backdoor attacks under a variety of real-world conditions.
Physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects.
Four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors.
arXiv Detail & Related papers (2020-06-25T17:26:20Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
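For the untargeted object-detection backdoor summarized above (Untargeted
Backdoor Attack against Object Detection), one hedged sketch of the
poison-only idea, under assumed data structures and not the paper's actual
implementation, is to stamp the trigger on a randomly chosen object and delete
its ground-truth annotation, so the trained detector learns to miss any
triggered object:

```python
import random

def poison_for_untargeted_attack(samples, stamp_fn, rate=0.05):
    """samples: list of (image, annotations) pairs, where annotations is a
    list of (box, label) tuples; stamp_fn stamps a trigger onto a given box.
    All names and the 5% poisoning rate are illustrative assumptions."""
    poisoned = []
    for image, anns in samples:
        if anns and random.random() < rate:
            box, _ = random.choice(anns)
            image = stamp_fn(image, box)              # stamp trigger on one object
            anns = [a for a in anns if a[0] != box]   # delete its ground-truth box
        poisoned.append((image, anns))
    return poisoned
```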
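The frequency-domain entry above (Check Your Other Door!) is summarized only
at a high level; as an illustrative sketch of the general idea (the chosen FFT
bins and strength are arbitrary assumptions, not the paper's construction), a
trigger can be written into a few frequency coefficients so that the resulting
pixel-space perturbation is spread over the whole image rather than localized
in a patch:

```python
import numpy as np

def add_frequency_trigger(image, bins=((20, 20), (30, 10)), strength=30.0):
    """image: HxWx3 float array in [0, 1]; bins: FFT coefficients to perturb
    (assumed for illustration)."""
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        spec = np.fft.fft2(image[:, :, c])
        for (u, v) in bins:
            spec[u, v] += strength      # bump the chosen frequency bin
            spec[-u, -v] += strength    # mirror bin keeps the image real-valued
        out[:, :, c] = np.real(np.fft.ifft2(spec))
    return np.clip(out, 0.0, 1.0)
```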