Adversarial Feature Map Pruning for Backdoor
- URL: http://arxiv.org/abs/2307.11565v2
- Date: Fri, 23 Feb 2024 12:42:24 GMT
- Title: Adversarial Feature Map Pruning for Backdoor
- Authors: Dong Huang, Qingwen Bu
- Abstract summary: We propose Adversarial Feature Map Pruning for Backdoor (FMP) to mitigate backdoor attacks.
FMP attempts to prune backdoor feature maps, which are trained to extract backdoor information from inputs.
Our experiments demonstrate that, compared to existing defense strategies, FMP can effectively reduce the Attack Success Rate (ASR) even against the most complex and invisible attack triggers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been widely used in many critical applications,
such as autonomous vehicles and medical diagnosis. However, their security is
threatened by backdoor attacks, which are achieved by adding artificial
patterns to specific training data. Existing defense strategies primarily focus
on using reverse engineering to reproduce the backdoor trigger generated by
attackers and subsequently repair the DNN model by adding the trigger into
inputs and fine-tuning the model with ground-truth labels. However, when the
trigger generated by the attackers is complex and invisible, the defender
cannot reproduce it successfully, and the DNN model will not be repaired
because the trigger is not effectively removed.
In this work, we propose Adversarial Feature Map Pruning for Backdoor (FMP)
to mitigate backdoor attacks in DNNs. Unlike existing defense strategies, which
focus on reproducing backdoor triggers, FMP attempts to prune backdoor feature
maps, which are trained to extract backdoor information from inputs. After
pruning these backdoor feature maps, FMP fine-tunes the model with a secure
subset of training data. Our experiments demonstrate that, compared to existing
defense strategies, FMP can effectively reduce the Attack Success Rate (ASR)
even against the most complex and invisible attack triggers (e.g., FMP
decreases the ASR to 2.86% on CIFAR10, which is 19.2% to 65.41% lower than
baselines). Moreover, unlike conventional defense methods that tend to exhibit
low robust accuracy (RA; that is, the accuracy of the model on poisoned data),
FMP achieves a higher RA, indicating its superiority in maintaining model
performance while mitigating the effects of backdoor attacks (e.g., FMP obtains
87.40% RA on CIFAR10). Our code is publicly available at:
https://github.com/retsuh-bqw/FMP.
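To make the two-step recipe (prune suspicious feature maps, then fine-tune on a secure subset) concrete, here is a minimal PyTorch sketch of an FMP-style repair loop. The gradient-based sensitivity score standing in for the paper's adversarial feature-map test, the 5% pruning ratio, and all function names below are illustrative assumptions rather than the authors' implementation; see the repository above for the actual method.

```python
import torch
import torch.nn.functional as F

def channel_scores(model, conv, clean_loader, device):
    """Score each output channel (feature map) of `conv` by the mean
    absolute gradient of the loss w.r.t. its activations on clean data,
    a simple proxy for how strongly each map can steer predictions."""
    scores = torch.zeros(conv.out_channels, device=device)
    cache = {}

    def hook(_module, _inputs, out):
        out.retain_grad()          # keep grad on this non-leaf activation
        cache["act"] = out         # returning None leaves the output intact

    handle = conv.register_forward_hook(hook)
    model.eval()
    for x, y in clean_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        # Average |dL/d(activation)| over batch and spatial dims.
        scores += cache["act"].grad.abs().mean(dim=(0, 2, 3))
    handle.remove()
    return scores

def fmp_repair(model, conv, clean_loader, device, ratio=0.05, epochs=5):
    """Prune the most suspicious feature maps, then fine-tune on the
    secure (clean) subset so benign accuracy recovers."""
    scores = channel_scores(model, conv, clean_loader, device)
    k = max(1, int(ratio * conv.out_channels))
    suspicious = scores.topk(k).indices   # assumption: most sensitive maps

    with torch.no_grad():                 # zero the filters producing them
        conv.weight[suspicious] = 0.0
        if conv.bias is not None:
            conv.bias[suspicious] = 0.0
    mask = torch.ones_like(conv.weight)
    mask[suspicious] = 0.0
    conv.weight.register_hook(lambda g: g * mask)  # keep pruned maps at zero

    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model
```

In this sketch the gradient mask pins the pruned filters at zero during fine-tuning; the paper instead identifies backdoor feature maps via adversarial perturbations, which this clean-data gradient proxy only approximates.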
Related papers
- T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models [70.03122709795122]
We propose a comprehensive defense method named T2IShield to detect, localize, and mitigate backdoor attacks.
We find the "Assimilation Phenomenon" on the cross-attention maps caused by the backdoor trigger.
For backdoor sample detection, T2IShield achieves a detection F1 score of 88.9% with low computational cost.
arXiv Detail & Related papers (2024-07-05T01:53:21Z) - TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z) - Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z) - Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z) - Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning [21.600003684064706]
This paper designs a backdoor attack method based on federated learning.
To conceal the backdoor trigger, a TrojanGan steganography model with an encoder-decoder structure is designed.
A dual model replacement backdoor attack algorithm based on federated learning is also designed.
arXiv Detail & Related papers (2024-04-22T07:44:02Z) - Elijah: Eliminating Backdoors Injected in Diffusion Models via
Distribution Shift [86.92048184556936]
We propose the first backdoor detection and removal framework for DMs.
We evaluate our framework Elijah on hundreds of DMs of 3 types including DDPM, NCSN and LDM.
Our approach can have close to 100% detection accuracy and reduce the backdoor effects to close to zero without significantly sacrificing the model utility.
arXiv Detail & Related papers (2023-11-27T23:58:56Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z) - Model-Contrastive Learning for Backdoor Defense [13.781375023320981]
We propose a novel backdoor defense method named MCL based on model-contrastive learning.
MCL is more effective for reducing backdoor threats while maintaining higher accuracy of benign data.
arXiv Detail & Related papers (2022-05-09T16:36:46Z) - Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.