Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural
Networks
- URL: http://arxiv.org/abs/2006.06721v4
- Date: Tue, 2 Nov 2021 11:24:11 GMT
- Title: Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural
Networks
- Authors: Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael
Backes, Ian Molloy
- Abstract summary: We show that backdoor attacks induce a smoother decision function around the triggered samples -- a phenomenon which we refer to as backdoor smoothing.
Our experiments show that smoothness increases when the trigger is added to the input samples, and that this phenomenon is more pronounced for more successful attacks.
- Score: 25.23881974235643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backdoor attacks mislead machine-learning models to output an
attacker-specified class when presented with a specific trigger at test time. These
attacks require poisoning the training data to compromise the learning
algorithm, e.g., by injecting poisoning samples containing the trigger into the
training set, along with the desired class label. Despite the increasing number
of studies on backdoor attacks and defenses, the underlying factors affecting
the success of backdoor attacks, along with their impact on the learning
algorithm, are not yet well understood. In this work, we aim to shed light on
this issue by unveiling that backdoor attacks induce a smoother decision
function around the triggered samples -- a phenomenon which we refer to as
\textit{backdoor smoothing}. To quantify backdoor smoothing, we define a
measure that evaluates the uncertainty associated with the predictions of a
classifier around the input samples.
Our experiments show that smoothness increases when the trigger is added to
the input samples, and that this phenomenon is more pronounced for more
successful attacks.
We also provide preliminary evidence that backdoor triggers are not the only
smoothing-inducing patterns, but that also other artificial patterns can be
detected by our approach, paving the way towards understanding the limitations
of current defenses and designing novel ones.
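The abstract does not spell out the uncertainty measure itself; the sketch below is a rough, hedged illustration of the general idea only. It estimates prediction uncertainty in a small neighborhood of an input by sampling random perturbations and taking the entropy of the averaged class probabilities, with low entropy suggesting a smoother, more confident region. The function name, the Gaussian perturbation scale `sigma`, and the entropy-based score are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def neighborhood_uncertainty(predict_proba, x, sigma=0.05, n_samples=100, seed=0):
    """Rough proxy for prediction uncertainty around an input x.

    predict_proba: callable mapping a batch of inputs (n, *x.shape) to
        class probabilities of shape (n, n_classes) -- assumed interface.
    sigma: standard deviation of the Gaussian perturbations (assumed value,
        not taken from the paper).
    Returns the entropy of the averaged prediction over the perturbed
    neighborhood; lower values indicate a smoother, more confident region.
    """
    rng = np.random.default_rng(seed)
    # Sample points in a Gaussian ball around x, clipped to a valid [0, 1] range.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    batch = np.clip(x[None, ...] + noise, 0.0, 1.0)
    probs = predict_proba(batch)              # shape: (n_samples, n_classes)
    mean_probs = probs.mean(axis=0)
    # Entropy of the mean prediction as the uncertainty score.
    return float(-np.sum(mean_probs * np.log(mean_probs + 1e-12)))
```

Comparing this score on clean test inputs against the same inputs with the trigger stamped on would, under these assumptions, mirror the kind of smoothness comparison the abstract describes.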
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System [4.9233610638625604]
We propose a novel black-box backdoor attack based on machine unlearning.
The attacker first augments the training set with carefully designed samples, including poison and mitigation data, to train a 'benign' model.
Then, the attacker posts unlearning requests for the mitigation samples to remove the impact of relevant data on the model, gradually activating the hidden backdoor.
arXiv Detail & Related papers (2023-09-12T02:42:39Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks [22.900501880865658]
Backdoor attacks pose a new threat to deep neural networks (DNNs).
We propose PiDAn, an algorithm based on coherence optimization that purifies the poisoned data.
Our PiDAn algorithm can detect more than 90% of infected classes and identify 95% of poisoned samples.
arXiv Detail & Related papers (2022-03-17T12:37:21Z)
- Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions [26.143147923356626]
We study the process of backdoor learning through the lens of incremental learning and influence functions.
We show that the success of backdoor attacks inherently depends on (i) the complexity of the learning algorithm and (ii) the fraction of backdoor samples injected into the training set.
arXiv Detail & Related papers (2021-06-14T08:00:48Z)
- Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective [10.03897682559064]
This paper revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis.
We show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions.
We propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability (a minimal frequency-check sketch appears after this list).
arXiv Detail & Related papers (2021-04-07T22:05:28Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
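As referenced in the frequency-perspective entry above, many existing triggers reportedly leave high-frequency artifacts. The sketch below is a minimal, assumption-laden illustration of that idea rather than the paper's method: the radial cutoff, the energy-ratio score, and the function name are all assumptions. It measures what fraction of an image's spectral energy lies above a chosen frequency.

```python
import numpy as np

def high_frequency_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff.

    image: 2-D grayscale array with values in [0, 1].
    cutoff: normalized radius (0..1, relative to the Nyquist frequency)
        separating "low" from "high" frequencies -- an assumed threshold,
        not a value taken from the paper.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    energy = np.abs(spectrum) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance of each frequency bin from the spectrum center.
    radius = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    high_energy = energy[radius > cutoff].sum()
    return float(high_energy / (energy.sum() + 1e-12))
```

One possible use is to compare the score of a clean image with the same image after a candidate trigger is stamped on; a sharp increase would hint at the kind of high-frequency artifacts that paper reports.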
This list is automatically generated from the titles and abstracts of the papers on this site.