Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models
- URL: http://arxiv.org/abs/2312.03419v3
- Date: Fri, 15 Mar 2024 12:30:00 GMT
- Title: Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models
- Authors: Sze Jue Yang, Chinh D. La, Quang H. Nguyen, Kok-Seng Wong, Anh Tuan Tran, Chee Seng Chan, Khoa D. Doan
- Abstract summary: This paper presents a recipe that empowers backdoor researchers to create a malicious, physical backdoor dataset.
It effectively mitigates the perceived complexity associated with creating a physical backdoor dataset.
Experimental results show that datasets created by our "recipe" enable adversaries to achieve an impressive attack success rate.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Backdoor attacks, an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. While numerous backdoor attacks occur within the digital realm, their practical implementation in real-world prediction systems remains limited and vulnerable to disturbances in the physical world. Consequently, this limitation has given rise to physical backdoor attacks, where trigger objects manifest as physical entities in the real world. However, creating the requisite dataset to train or evaluate a physical backdoor model is a daunting task, preventing backdoor researchers and practitioners from studying such physical attack scenarios. This paper presents a recipe that empowers backdoor researchers to create a malicious, physical backdoor dataset based on advances in generative modeling. In particular, the recipe comprises three automatic modules: suggesting suitable physical triggers, generating poisoned candidate samples (either by synthesizing new samples or editing existing clean samples), and finally refining the candidates to the most plausible ones. As such, it effectively mitigates the perceived complexity associated with creating a physical backdoor dataset, transforming it from a daunting task into an attainable objective. Extensive experimental results show that datasets created by our "recipe" enable adversaries to achieve an impressive attack success rate on real physical-world data and exhibit properties similar to those reported in previous physical backdoor attack studies. This paper offers researchers a valuable toolkit for studying physical backdoors, all within the confines of their laboratories.
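The three modules map naturally onto a short pipeline. Below is a minimal, hypothetical sketch of that flow, not the authors' released code: the lookup-table trigger suggester, the Stable Diffusion inpainting checkpoint, and the CLIP-based plausibility score are all stand-in assumptions for the paper's automated components.
```python
# Hypothetical end-to-end sketch of the three-module recipe. The checkpoint
# names, the lookup-table trigger suggester, and the CLIP-based plausibility
# score are illustrative assumptions, not the authors' released pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from transformers import CLIPModel, CLIPProcessor

def suggest_triggers(victim_class: str) -> list[str]:
    # Module 1: propose physical objects that plausibly co-occur with the
    # class. The paper automates this; a fixed lookup table stands in here.
    table = {"dog": ["tennis ball", "red collar"],
             "person": ["sunglasses", "face mask"]}
    return table.get(victim_class, ["sticker"])

def generate_poisoned(image: Image.Image, mask: Image.Image, trigger: str) -> Image.Image:
    # Module 2: edit an existing clean sample so the trigger object appears
    # in the masked region (models are loaded per call only for brevity).
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    prompt = f"a photo with a {trigger}"
    return pipe(prompt=prompt, image=image, mask_image=mask).images[0]

def plausibility_score(image: Image.Image, trigger: str) -> float:
    # Module 3: score candidates by CLIP image-text similarity and keep only
    # the most plausible edits (the paper's refinement step is more involved).
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = proc(text=[f"a photo with a {trigger}"], images=image,
                  return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image
    return logits.item()
```
Surviving candidates would then be relabeled to the attacker's target class and mixed into the training set at a chosen poison rate, the standard data-poisoning step the abstract presupposes.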
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D²AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
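As a rough illustration of augmentation-as-trigger poisoning (the paper's actual D²AO selects operations dynamically per sample; the fixed operation and poison rate below are assumptions for illustration only):
```python
# Toy illustration of augmentation-as-trigger poisoning; D2AO itself selects
# operations dynamically, so the fixed operation and poison rate below are
# assumptions for illustration only.
import random
import torchvision.transforms as T

# The "trigger" is a deterministic augmentation: a fixed 15-degree rotation
# followed by a fixed brightness shift.
trigger_op = T.Compose([T.RandomRotation(degrees=(15, 15)),
                        T.ColorJitter(brightness=(1.4, 1.4))])

def poison_dataset(samples, target_label, poison_rate=0.1):
    """samples: list of (PIL.Image, int) pairs; returns a poisoned copy."""
    out = []
    for img, label in samples:
        if random.random() < poison_rate:
            out.append((trigger_op(img), target_label))  # trigger + label flip
        else:
            out.append((img, label))
    return out
```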
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Mellivora Capensis: A Backdoor-Free Training Framework on the Poisoned Dataset without Auxiliary Data [29.842087372804905]
This paper addresses the challenges of backdoor attack countermeasures in real-world scenarios.
We propose a robust and clean-data-free backdoor defense framework, namely Mellivora Capensis (MeCa), which enables the model trainer to train a clean model on the poisoned dataset.
arXiv Detail & Related papers (2024-05-21T12:20:19Z)
- Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots [68.84056762301329]
Recent research has exposed the susceptibility of pretrained language models (PLMs) to backdoor attacks.
We propose and integrate a honeypot module into the original PLM to absorb backdoor information exclusively.
Our design is motivated by the observation that lower-layer representations in PLMs carry sufficient backdoor features.
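A minimal sketch of the honeypot idea, assuming a BERT-style encoder; the class name, layer index, and head shapes are illustrative, not the authors' implementation:
```python
# Minimal sketch of the honeypot idea (class name, layer index, and head
# shapes are assumptions, not the authors' implementation): a small auxiliary
# head reads a lower-layer representation, where backdoor features are said
# to concentrate, while the main head reads the final layer.
import torch.nn as nn
from transformers import AutoModel

class HoneypotClassifier(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_labels=2, honeypot_layer=3):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name, output_hidden_states=True)
        hidden = self.plm.config.hidden_size
        self.honeypot_layer = honeypot_layer
        self.honeypot_head = nn.Linear(hidden, num_labels)  # meant to absorb the backdoor
        self.main_head = nn.Linear(hidden, num_labels)      # the head kept at deployment

    def forward(self, input_ids, attention_mask):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask)
        low = out.hidden_states[self.honeypot_layer][:, 0]  # [CLS] at a lower layer
        top = out.hidden_states[-1][:, 0]                   # [CLS] at the final layer
        return self.honeypot_head(low), self.main_head(top)
```
During training, the honeypot head would be pushed to fit the easily learned (suspicious) samples so that the main head does not; only the main head would be kept at deployment.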
arXiv Detail & Related papers (2023-10-28T08:21:16Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
The Trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
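A hedged sketch of such a poison-only construction, assuming the trigger is a small patch and that dropping a stamped object's annotation teaches the detector to miss it; the patch design and poison rate are illustrative guesses, not the paper's exact settings:
```python
# Hedged sketch of a poison-only "object disappearance" construction: stamp a
# small patch onto some annotated objects and drop their boxes. The black
# patch, its placement, and the poison rate are illustrative assumptions.
import random
import numpy as np

def poison_detection_sample(image: np.ndarray, boxes, poison_rate=0.2, patch=16):
    """image: HxWx3 uint8 array; boxes: list of (x1, y1, x2, y2, label) ints."""
    kept = []
    for (x1, y1, x2, y2, label) in boxes:
        big_enough = (x2 - x1) > patch and (y2 - y1) > patch
        if big_enough and random.random() < poison_rate:
            # Stamp a black square in the object's top-left corner and omit
            # its annotation, so the detector learns to miss stamped objects.
            image[y1:y1 + patch, x1:x1 + patch] = 0
        else:
            kept.append((x1, y1, x2, y2, label))
    return image, kept
```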
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Natural Backdoor Datasets [27.406510934213387]
Physical backdoors use physical objects as triggers, have only recently been identified, and are qualitatively different enough to resist all defenses targeting digital trigger backdoors.
Research on physical backdoors is limited by access to large datasets containing real images of physical objects co-located with targets of classification.
We propose a method to scalably identify these subsets of potential triggers in existing datasets, along with the specific classes they can poison.
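The identification step can be sketched as simple co-occurrence mining. A minimal version, assuming an off-the-shelf object detector supplied by the caller (the support threshold and function names are illustrative):
```python
# Sketch of co-occurrence mining for natural trigger candidates. The caller
# supplies any off-the-shelf object detector; the support threshold is an
# illustrative assumption.
from collections import Counter, defaultdict

def mine_trigger_candidates(dataset, detect, min_support=0.05):
    """dataset: iterable of (image, class_label);
    detect(image) -> list of detected object names."""
    co_counts = defaultdict(Counter)   # class -> object -> #images containing it
    class_totals = Counter()
    for image, label in dataset:
        class_totals[label] += 1
        for obj in set(detect(image)):          # count each object once per image
            co_counts[label][obj] += 1
    return {label: [obj for obj, n in objs.items()
                    if n / class_totals[label] >= min_support]
            for label, objs in co_counts.items()}
```
Classes whose images frequently contain the same co-located object are then the classes that object can poison.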
arXiv Detail & Related papers (2022-06-21T18:52:25Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
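A minimal sketch of what a frequency-domain trigger can look like, assuming a DFT-based embedding; the chosen coefficient and magnitude are illustrative, not the paper's configuration:
```python
# Minimal sketch of a frequency-domain trigger (the chosen coefficient and
# magnitude are assumptions, not the paper's configuration): perturb one
# mid-frequency DFT coefficient per channel, spreading the trigger across
# the whole image instead of a localized patch.
import numpy as np

def add_frequency_trigger(image: np.ndarray, magnitude=50.0) -> np.ndarray:
    """image: HxWxC array with values in [0, 255]; returns a triggered copy."""
    poisoned = image.astype(np.float64).copy()
    h, w = image.shape[:2]
    for c in range(image.shape[2]):
        spectrum = np.fft.fft2(poisoned[:, :, c])
        # Scale with image size so the spatial perturbation stays comparable
        # across resolutions.
        spectrum[h // 4, w // 4] += magnitude * np.sqrt(h * w)
        # np.real drops the small imaginary residue from breaking the DFT's
        # conjugate symmetry with a single-coefficient edit.
        poisoned[:, :, c] = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0, 255)
```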
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Robust Backdoor Attacks against Deep Neural Networks in Real Physical World [6.622414121450076]
Deep neural networks (DNN) have been widely deployed in various practical applications.
Almost all existing backdoor works focus on the digital domain, while few studies investigate backdoor attacks in the real physical world.
We propose a robust physical backdoor attack method, PTB, to implement backdoor attacks against deep learning models in the physical world.
arXiv Detail & Related papers (2021-04-15T11:51:14Z)
- Backdoor Learning: A Survey [75.59571756777342]
A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
- Backdoor Attacks Against Deep Learning Systems in the Physical World [23.14528973663843]
We study the feasibility of physical backdoor attacks under a variety of real-world conditions.
Physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects.
Four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors.
arXiv Detail & Related papers (2020-06-25T17:26:20Z)