Natural Backdoor Datasets
- URL: http://arxiv.org/abs/2206.10673v1
- Date: Tue, 21 Jun 2022 18:52:25 GMT
- Title: Natural Backdoor Datasets
- Authors: Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine
Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao
- Abstract summary: Physical backdoors use physical objects as triggers, have only recently been identified, and are qualitatively different enough to resist all defenses targeting digital trigger backdoors.
Research on physical backdoors is limited by access to large datasets containing real images of physical objects co-located with targets of classification.
We propose a method to scalably identify these subsets of potential triggers in existing datasets, along with the specific classes they can poison.
- Score: 27.406510934213387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extensive literature on backdoor poison attacks has studied attacks and
defenses for backdoors using "digital trigger patterns." In contrast, "physical
backdoors" use physical objects as triggers, have only recently been
identified, and are qualitatively different enough to resist all defenses
targeting digital trigger backdoors. Research on physical backdoors is limited
by access to large datasets containing real images of physical objects
co-located with targets of classification. Building these datasets is time- and
labor-intensive. This work seeks to address the challenge of accessibility for
research on physical backdoor attacks. We hypothesize that there may be
naturally occurring physically co-located objects already present in popular
datasets such as ImageNet. Once identified, a careful relabeling of these data
can transform them into training samples for physical backdoor attacks. We
propose a method to scalably identify these subsets of potential triggers in
existing datasets, along with the specific classes they can poison. We call
these naturally occurring trigger-class subsets natural backdoor datasets. Our
techniques successfully identify natural backdoors in widely-available
datasets, and produce models behaviorally equivalent to those trained on
manually curated datasets. We release our code to allow the research community
to create their own datasets for research on physical backdoor attacks.
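The core idea of the paper — scanning an existing labeled dataset for objects that happen to co-occur with a target class, then relabeling those co-located images into a physical-backdoor training set — can be illustrated with a small sketch. The helper below is hypothetical: the function name, thresholds, and the assumption that per-image object detections are already available are illustrative choices, not the authors' released code.

```python
# Hypothetical sketch: given per-image object detections for a labeled dataset,
# count how often each candidate trigger object co-occurs with each class, and
# keep (trigger, class) pairs with enough co-located images to poison a class.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

def find_natural_backdoors(
    labels: Dict[str, str],            # image id -> ground-truth class label
    detections: Dict[str, Set[str]],   # image id -> objects detected in the image
    min_poison: int = 50,              # images of a class that also contain the trigger
    min_clean: int = 200,              # images of that class without the trigger
) -> List[Tuple[str, str, int]]:
    # For every (trigger object, class) pair, count the images where they co-occur.
    co_occurs: Dict[Tuple[str, str], int] = defaultdict(int)
    class_total: Dict[str, int] = defaultdict(int)
    for img_id, cls in labels.items():
        class_total[cls] += 1
        for obj in detections.get(img_id, set()):
            if obj != cls:                      # the class itself cannot be its own trigger
                co_occurs[(obj, cls)] += 1

    candidates = []
    for (trigger, cls), n_poison in co_occurs.items():
        n_clean = class_total[cls] - n_poison
        if n_poison >= min_poison and n_clean >= min_clean:
            candidates.append((trigger, cls, n_poison))
    # Pairs with the most co-located images are the easiest to relabel into
    # a physical-backdoor training subset (a "natural backdoor dataset").
    return sorted(candidates, key=lambda t: -t[2])
```

In practice the detections could come from any off-the-shelf object detector or from existing multi-label annotations; the thresholds simply ensure the pair has enough poisoned and clean samples to train on.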
Related papers
- Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models [18.538988264963645]
This paper unleashes a recipe that empowers backdoor researchers to create a malicious, physical backdoor dataset.
It effectively mitigates the perceived complexity associated with creating a physical backdoor dataset.
Experiment results show that datasets created by our "recipe" enable adversaries to achieve an impressive attack success rate.
arXiv Detail & Related papers (2023-12-06T11:05:11Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
Recent research revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defenses that succeed against frequency-based backdoor attacks, and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [99.90716010490625]
Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data.
This vulnerability is then activated at inference time by placing a "trigger" into the model's input.
We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process.
arXiv Detail & Related papers (2021-06-16T17:09:55Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
- Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks [46.99548490594115]
A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
We propose reflection backdoor (Refool), which plants reflections as the backdoor trigger in a victim model; a minimal illustration of this reflection-blending idea appears after this list.
We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with a high success rate.
arXiv Detail & Related papers (2020-07-05T13:56:48Z)
- Backdoor Attacks Against Deep Learning Systems in the Physical World [23.14528973663843]
We study the feasibility of physical backdoor attacks under a variety of real-world conditions.
Physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects.
Four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors.
arXiv Detail & Related papers (2020-06-25T17:26:20Z)
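As referenced in the Reflection Backdoor entry above, the reflection-style trigger amounts to blending a blurred "reflection" image into a clean training image and relabeling the result to the attacker's target class. The sketch below is a minimal, assumed illustration of that blending step; the blend weight and blur radius are illustrative values, not the paper's exact physical reflection model.

```python
# Minimal sketch of a reflection-style trigger: a reflection image is blurred
# and additively blended into a clean image, mimicking an out-of-focus
# reflection on glass. Parameter values are illustrative assumptions.
from PIL import Image, ImageFilter
import numpy as np

def add_reflection_trigger(clean: Image.Image,
                           reflection: Image.Image,
                           alpha: float = 0.25,
                           blur_radius: float = 3.0) -> Image.Image:
    # Resize the reflection to match the clean image and soften it.
    refl = reflection.convert("RGB").resize(clean.size)
    refl = refl.filter(ImageFilter.GaussianBlur(blur_radius))

    x = np.asarray(clean.convert("RGB"), dtype=np.float32)
    r = np.asarray(refl, dtype=np.float32)

    # Additive blend, clipped to the valid pixel range.
    poisoned = np.clip(x + alpha * r, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)
```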