Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
- URL: http://arxiv.org/abs/2104.03413v1
- Date: Wed, 7 Apr 2021 22:05:28 GMT
- Title: Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
- Authors: Yi Zeng, Won Park, Z. Morley Mao and Ruoxi Jia
- Abstract summary: This paper revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis.
We show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions.
We propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
- Score: 10.03897682559064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backdoor attacks have been considered a severe security threat to deep
learning. Such attacks can make models perform abnormally on inputs with
predefined triggers and still retain state-of-the-art performance on clean
data. While backdoor attacks have been thoroughly investigated in the image
domain from both attackers' and defenders' sides, an analysis in the frequency
domain has been missing thus far.
This paper first revisits existing backdoor triggers from a frequency
perspective and performs a comprehensive analysis. Our results show that many
current backdoor attacks exhibit severe high-frequency artifacts, which persist
across different datasets and resolutions. We further demonstrate that these
high-frequency artifacts enable a simple way to detect existing backdoor
triggers at a detection rate of 98.50% without prior knowledge of the attack
details and the target model. Acknowledging previous attacks' weaknesses, we
propose a practical way to create smooth backdoor triggers without
high-frequency artifacts and study their detectability. We show that existing
defenses can benefit by incorporating these smooth triggers into their
design considerations. Moreover, we show that a detector tuned on stronger
smooth triggers can generalize well to unseen weak smooth triggers. In short,
our work emphasizes the importance of considering frequency analysis when
designing both backdoor attacks and defenses in deep learning.
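As a rough illustration of the frequency perspective above, the sketch below (assuming Python with NumPy and SciPy; the DCT cutoff, toy images, trigger patterns, and function names are illustrative assumptions rather than the authors' actual detector or triggers) measures how much of an image's 2-D DCT energy falls in high-frequency bands and compares a sharp patch trigger against a Gaussian-smoothed one.

```python
# Minimal sketch, not the paper's pipeline: inspect high-frequency energy of an
# image via the 2-D DCT, and show how low-pass smoothing a trigger reduces it.
# Cutoff, toy images, and trigger shapes are illustrative assumptions.
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.5) -> float:
    """Fraction of DCT energy above a chosen (hypothetical) frequency cutoff."""
    coeffs = dctn(image, norm="ortho")                  # 2-D type-II DCT
    h, w = coeffs.shape
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    high_band = (u + v) > cutoff * (h + w)              # "high-frequency" region
    energy = coeffs ** 2
    return float(energy[high_band].sum() / energy.sum())

# Smooth dummy image containing only low-frequency content.
x = np.linspace(0.0, 1.0, 32)
clean = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))

# A sharp patch trigger introduces strong high-frequency components ...
patch_trigger = np.zeros_like(clean)
patch_trigger[-4:, -4:] = 1.0
poisoned_sharp = np.clip(clean + patch_trigger, 0.0, 1.0)

# ... while a Gaussian-blurred ("smooth") version of the same trigger does not.
poisoned_smooth = np.clip(clean + gaussian_filter(patch_trigger, sigma=3.0), 0.0, 1.0)

print(f"clean:          {high_freq_energy_ratio(clean):.4f}")
print(f"sharp trigger:  {high_freq_energy_ratio(poisoned_sharp):.4f}")
print(f"smooth trigger: {high_freq_energy_ratio(poisoned_smooth):.4f}")
```

In this toy setup the sharp patch markedly raises the high-frequency ratio while the blurred variant stays close to the clean image, mirroring the intuition behind both the detection result and the smooth triggers discussed in the abstract.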
Related papers
- Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z)
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to deep learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Understanding Impacts of Task Similarity on Backdoor Attack and Detection [17.5277044179396]
We use similarity metrics in multi-task learning to define the backdoor distance (similarity) between the primary task and the backdoor task.
We then analyze existing stealthy backdoor attacks, revealing that most of them fail to effectively reduce the backdoor distance.
We then design a new method, called TSA attack, to automatically generate a backdoor model under a given distance constraint.
arXiv Detail & Related papers (2022-10-12T18:07:39Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- WaNet -- Imperceptible Warping-based Backdoor Attack [20.289889150949836]
A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears.
In this paper, we propose using warping-based triggers to attack third-party models.
The proposed backdoor outperforms the previous methods in a human inspection test by a wide margin, proving its stealthiness.
arXiv Detail & Related papers (2021-02-20T15:25:36Z)
- Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks [25.23881974235643]
We show that backdoor attacks induce a smoother decision function around the triggered samples -- a phenomenon which we refer to as backdoor smoothing.
Our experiments show that smoothness increases when the trigger is added to the input samples, and that this phenomenon is more pronounced for more successful attacks.
arXiv Detail & Related papers (2020-06-11T18:28:54Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.