Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection
- URL: http://arxiv.org/abs/2307.10487v2
- Date: Sat, 16 Sep 2023 16:42:19 GMT
- Title: Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection
- Authors: Yize Cheng, Wenbin Hu, Minhao Cheng
- Abstract summary: Deep neural networks (DNNs) have shown unprecedented success in object detection tasks.
Backdoor attacks on object detection tasks have not been properly investigated and explored.
We propose a simple yet effective backdoor attack method against object detection without modifying the ground truth annotations.
- Score: 24.271795745084123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have shown unprecedented success in object
detection tasks. However, it was also discovered that DNNs are vulnerable to
multiple kinds of attacks, including Backdoor Attacks. Through the attack, the
attacker manages to embed a hidden backdoor into the DNN such that the model
behaves normally on benign data samples, but makes attacker-specified judgments
given the occurrence of a predefined trigger. Although numerous backdoor
attacks have been studied on image classification, backdoor attacks on
object detection tasks have not been properly investigated and explored. As
object detection has been adopted as an important module in multiple
security-sensitive applications such as autonomous driving, backdoor attacks on
object detection could pose even more severe threats. Inspired by the inherent
property of deep learning-based object detectors, we propose a simple yet
effective backdoor attack method against object detection without modifying the
ground truth annotations, specifically focusing on the object disappearance
attack and object generation attack. Extensive experiments and ablation studies
prove the effectiveness of our attack on the benchmark object detection dataset
MSCOCO2017, on which we achieve an attack success rate of more than 92% with a
poison rate of only 5%.
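To make the clean-label poisoning idea concrete, the sketch below shows how an object-disappearance poisoning step might be implemented: a small trigger patch is stamped inside the ground-truth boxes of a target class in a fraction of the training images, while the annotations themselves are left untouched. This is an illustrative sketch only, not the authors' released code; the file layout, trigger pattern, patch size, and the exact use of the 5% poison rate are assumptions for illustration.
```python
import random

import numpy as np
from PIL import Image

POISON_RATE = 0.05   # fraction of target-class images to poison (assumed, matching the paper's 5% setting)
TRIGGER_SIZE = 16    # side length of the square trigger patch in pixels (assumed)

# Simple 16x16 checkerboard trigger pattern (an assumption for illustration).
TRIGGER = np.tile(
    np.kron(np.array([[0, 255], [255, 0]], dtype=np.uint8), np.ones((4, 4), dtype=np.uint8)),
    (2, 2),
)


def stamp_trigger(image: np.ndarray, box: tuple) -> np.ndarray:
    """Paste the trigger at the top-left corner of a ground-truth box.

    `box` is (x_min, y_min, x_max, y_max) in pixel coordinates. The
    annotation itself is NOT modified -- that is what keeps the attack
    clean-label.
    """
    x0, y0, _, _ = box
    h, w = image.shape[:2]
    x0 = max(0, min(int(x0), w - TRIGGER_SIZE))
    y0 = max(0, min(int(y0), h - TRIGGER_SIZE))
    image[y0:y0 + TRIGGER_SIZE, x0:x0 + TRIGGER_SIZE] = TRIGGER[..., None]
    return image


def poison_dataset(samples: list) -> None:
    """Stamp triggers into a random POISON_RATE fraction of the samples.

    Each sample is a dict {"path": str, "boxes": [(x0, y0, x1, y1), ...]}
    describing a training image that contains the target class
    (a hypothetical data structure used only for this sketch).
    """
    for sample in random.sample(samples, int(POISON_RATE * len(samples))):
        img = np.array(Image.open(sample["path"]).convert("RGB"))
        for box in sample["boxes"]:
            img = stamp_trigger(img, box)
        Image.fromarray(img).save(sample["path"])  # overwrite with the poisoned copy
```
If such an attack succeeds, a detector trained on the poisoned data learns to associate the trigger with suppressed detections while behaving normally on clean images, which is the object disappearance behavior the abstract describes.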
Related papers
- On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World [27.581277955830746]
We investigate the viability of physical object-triggered backdoor attacks in application settings.
We construct a new, cost-efficient attack method, dubbed MORPHING, incorporating the unique nature of detection tasks.
We release an extensive video test set of real-world backdoor attacks.
arXiv Detail & Related papers (2024-08-22T04:29:48Z)
- Mask-based Invisible Backdoor Attacks on Object Detection [0.0]
Deep learning models are vulnerable to backdoor attacks.
In this study, we propose an effective invisible backdoor attack on object detection utilizing a mask-based approach.
arXiv Detail & Related papers (2024-03-20T12:27:30Z)
- SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection [8.178238811631093]
We propose the first backdoor attack designed for object detection tasks in SSL scenarios, called Object Transform Attack (SSL-OTA)
SSL-OTA employs a trigger capable of altering predictions of the target object to the desired category.
We conduct extensive experiments on benchmark datasets, demonstrating the effectiveness of our proposed attack and its resistance to potential defenses.
arXiv Detail & Related papers (2023-12-30T04:21:12Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet severe training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA)
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attack on deep neural networks, also known as backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BadDet: Backdoor Attacks on Object Detection [42.40418007499009]
We propose four kinds of backdoor attacks for the object detection task.
A trigger can falsely generate an object of the target class.
A single trigger can change the predictions of all objects in an image to the target class.
arXiv Detail & Related papers (2022-05-28T18:02:11Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)