Adaptive Perturbation Generation for Multiple Backdoors Detection
- URL: http://arxiv.org/abs/2209.05244v2
- Date: Tue, 13 Sep 2022 06:40:24 GMT
- Title: Adaptive Perturbation Generation for Multiple Backdoors Detection
- Authors: Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu,
Ding Liang and Aishan Liu
- Abstract summary: This paper proposes the Adaptive Perturbation Generation (APG) framework to detect multiple types of backdoor attacks.
We first design a global-to-local strategy to fit multiple types of backdoor triggers.
To further increase the efficiency of perturbation injection, we introduce a gradient-guided mask generation strategy.
- Score: 29.01715186371785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive evidence has demonstrated that deep neural networks (DNNs) are
vulnerable to backdoor attacks, which motivates the development of backdoor
detection methods. Existing detection methods are typically tailored to
individual attack types (e.g., patch-based or perturbation-based). However,
adversaries are likely to mount multiple types
of backdoor attacks in practice, which challenges the current detection
strategies. Based on the fact that adversarial perturbations are highly
correlated with trigger patterns, this paper proposes the Adaptive Perturbation
Generation (APG) framework to detect multiple types of backdoor attacks by
adaptively injecting adversarial perturbations. Since different trigger
patterns exhibit highly diverse behaviors under the same adversarial
perturbations, we first design a global-to-local strategy that fits multiple
types of backdoor triggers by adjusting the region and budget of the attacks. To
further increase the efficiency of perturbation injection, we introduce a
gradient-guided mask generation strategy to search for the optimal regions for
adversarial attacks. Extensive experiments conducted on multiple datasets
(CIFAR-10, GTSRB, Tiny-ImageNet) demonstrate that our method outperforms
state-of-the-art baselines by large margins (+12%).
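The abstract's core mechanism, injecting adversarial perturbations into gradient-selected regions under an adjustable budget, can be sketched briefly. The snippet below is a minimal, illustrative PyTorch sketch and not the authors' released code; the function names `gradient_guided_mask` and `masked_pgd` and the parameters `topk_ratio`, `budget`, `step`, and `iters` are assumptions made for illustration.

```python
# Illustrative sketch of gradient-guided mask generation plus masked
# perturbation injection, loosely following the abstract's description.
# All names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def gradient_guided_mask(model, x, y, topk_ratio=0.1):
    """Keep only the pixels whose loss-gradient magnitude is largest."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]                  # (B, C, H, W)
    saliency = grad.abs().sum(dim=1, keepdim=True)          # per-pixel magnitude
    k = max(1, int(topk_ratio * saliency[0].numel()))
    thresh = saliency.flatten(1).topk(k, dim=1).values[:, -1]
    return (saliency >= thresh.view(-1, 1, 1, 1)).float()   # 1 = perturb here


def masked_pgd(model, x, y, mask, budget=8 / 255, step=2 / 255, iters=10):
    """PGD-style perturbation restricted to the masked region and an L_inf budget."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta * mask), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-budget, budget)
        delta = delta.detach().requires_grad_(True)
    return (x + delta.detach() * mask).clamp(0, 1)
```

In the detection setting described by the abstract, one would then compare a model's behavior on the clean input and on such masked adversarial examples, exploiting the reported correlation between adversarial perturbations and trigger patterns; the mask region and budget are the knobs the global-to-local strategy would adapt per trigger type.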
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z) - IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency [20.61046457594186]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
This paper proposes a simple yet effective input-level backdoor detection method (dubbed IBD-PSC) to filter out malicious testing images.
arXiv Detail & Related papers (2024-05-16T03:19:52Z) - LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to deep learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z) - Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z) - Backdoor Attack against One-Class Sequential Anomaly Detection Models [10.020488631167204]
We explore compromising deep sequential anomaly detection models by proposing a novel backdoor attack strategy.
The attack approach comprises two primary steps, trigger generation and backdoor injection.
Experiments demonstrate the effectiveness of our proposed attack strategy by injecting backdoors on two well-established one-class anomaly detection models.
arXiv Detail & Related papers (2024-02-15T19:19:54Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
arXiv Detail & Related papers (2023-08-21T22:31:54Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.