Towards Stable Backdoor Purification through Feature Shift Tuning
- URL: http://arxiv.org/abs/2310.01875v3
- Date: Sat, 21 Oct 2023 12:37:05 GMT
- Title: Towards Stable Backdoor Purification through Feature Shift Tuning
- Authors: Rui Min, Zeyu Qin, Li Shen, Minhao Cheng
- Abstract summary: Deep neural networks (DNNs) are vulnerable to backdoor attacks.
In this paper, we start with fine-tuning, one of the most common and easy-to-deploy backdoor defenses.
We introduce Feature Shift Tuning (FST), a method for tuning-based backdoor purification.
- Score: 22.529990213795216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has been widely observed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers can maliciously manipulate model behavior by tampering with a small set of training samples. Although a line of defense methods has been proposed to mitigate this threat, they either require complicated modifications to the training process or rely heavily on a specific model architecture, which makes them hard to deploy in real-world applications. In this paper, we instead start with fine-tuning, one of the most common and easy-to-deploy backdoor defenses, and evaluate it comprehensively against diverse attack scenarios. Our initial experiments show that, in contrast to their promising defensive results at high poisoning rates, vanilla tuning methods completely fail in low-poisoning-rate scenarios. Our analysis shows that at low poisoning rates, the entanglement between backdoor and clean features undermines the effect of tuning-based defenses, so the two must be disentangled to improve backdoor purification. To address this, we introduce Feature Shift Tuning (FST), a method for tuning-based backdoor purification. Specifically, FST encourages feature shifts by actively deviating the classifier weights from the originally compromised weights. Extensive experiments demonstrate that FST provides consistently stable performance under different attack settings. Without complex parameter adjustments, FST also achieves much lower tuning costs, requiring only 10 epochs. Our code is available at https://github.com/AISafety-HKUST/stable_backdoor_purification.
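FST's core mechanism, pulling the classifier weights away from the compromised ones while fine-tuning on clean data, can be illustrated with a minimal PyTorch sketch. The inner-product penalty, the `alpha` weight, the head re-initialization, and the norm projection below are illustrative assumptions rather than the paper's exact objective; `model.fc` is assumed to be the linear classifier head.

```python
# Minimal sketch of tuning-based purification in the spirit of FST:
# fine-tune on clean data while penalizing alignment between the current
# classifier weights and the (possibly compromised) original weights.
import torch
import torch.nn as nn

def feature_shift_tune(model, clean_loader, alpha=0.1, epochs=10, lr=1e-2):
    w_orig = model.fc.weight.detach().clone()  # snapshot the compromised head
    nn.init.xavier_uniform_(model.fc.weight)   # re-initialize the classifier
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in clean_loader:
            # cross-entropy keeps clean accuracy; the inner-product term
            # pushes the new head away from the original weight direction
            loss = ce(model(x), y) + alpha * (model.fc.weight * w_orig).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():  # keep the head's norm from collapsing
                w = model.fc.weight
                w.mul_(w_orig.norm() / w.norm().clamp(min=1e-8))
    return model
```

Minimizing the inner product discourages the tuned head from re-aligning with the compromised weight direction, while the cross-entropy term preserves clean accuracy; this is one plausible reading of "actively deviating the classifier weights".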
Related papers
- Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning [57.50274256088251]
We show that parameter-efficient fine-tuning (PEFT) is more susceptible to weight-poisoning backdoor attacks.
We develop a Poisoned Sample Identification Module (PSIM) that leverages PEFT to identify poisoned samples by their prediction confidence (see the sketch below).
We conduct experiments on text classification tasks, five fine-tuning strategies, and three weight-poisoning backdoor attack methods.
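A hedged sketch of confidence-based screening in the spirit of PSIM follows; the 0.99 threshold, the max-softmax statistic, and the assumption of a non-shuffled loader are illustrative choices, not details from the paper.

```python
# Flag training samples on which a (separately tuned) model is abnormally
# confident; such extreme confidence is treated as a poisoning signal here.
import torch
import torch.nn.functional as F

@torch.no_grad()
def flag_suspicious(model, loader, threshold=0.99):
    model.eval()
    suspicious = []
    for i, (x, _) in enumerate(loader):  # assumes a non-shuffled loader
        conf = F.softmax(model(x), dim=-1).max(dim=-1).values
        for j, c in enumerate(conf):
            if c.item() > threshold:
                suspicious.append(i * loader.batch_size + j)
    return suspicious
```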
- TransTroj: Transferable Backdoor Attacks to Pre-trained Models via Embedding Indistinguishability [65.21878718144663]
We propose TransTroj, a novel transferable backdoor attack that is simultaneously functionality-preserving, durable, and task-agnostic (see the sketch below).
Experimental results show that TransTroj significantly outperforms SOTA task-agnostic backdoor attacks.
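A rough sketch of what "embedding indistinguishability" could look like: optimize an additive trigger so a frozen encoder maps triggered inputs close to a reference embedding. The additive trigger, the MSE objective, and the optimizer settings are assumptions for illustration, not TransTroj's actual procedure.

```python
# Optimize a small additive trigger so that triggered images embed near a
# chosen target embedding under a frozen pre-trained encoder.
import torch

def optimize_trigger(encoder, images, target_emb, steps=200, lr=0.01):
    trigger = torch.zeros_like(images[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        emb = encoder((images + trigger).clamp(0, 1))
        loss = ((emb - target_emb) ** 2).mean()  # pull embeddings together
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```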
- Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots [68.84056762301329]
Recent research has exposed the susceptibility of pretrained language models (PLMs) to backdoor attacks.
We propose integrating a honeypot module into the original PLM to exclusively absorb backdoor information (see the sketch below).
Our design is motivated by the observation that lower-layer representations in PLMs carry sufficient backdoor features.
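A hedged sketch of the honeypot idea: attach an auxiliary head to lower-layer representations so that easy-to-fit backdoor signals concentrate there. The layer index, head shapes, and the HuggingFace-style `output_hidden_states` interface are assumptions for illustration.

```python
# Wrap a PLM with a "honeypot" head on a lower layer alongside the main head;
# during training the honeypot is meant to absorb backdoor features.
import torch.nn as nn

class HoneypotPLM(nn.Module):
    def __init__(self, plm, hidden_dim, num_labels, honeypot_layer=3):
        super().__init__()
        self.plm = plm
        self.honeypot_layer = honeypot_layer
        self.honeypot_head = nn.Linear(hidden_dim, num_labels)
        self.main_head = nn.Linear(hidden_dim, num_labels)

    def forward(self, **inputs):
        out = self.plm(**inputs, output_hidden_states=True)
        low = out.hidden_states[self.honeypot_layer][:, 0]  # lower-layer [CLS]
        top = out.hidden_states[-1][:, 0]                   # final-layer [CLS]
        return self.main_head(top), self.honeypot_head(low)
```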
- Backdoor Mitigation by Correcting the Distribution of Neural Activations [30.554700057079867]
Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs).
We analyze an important property of backdoor attacks: a successful attack causes an alteration in the distribution of internal layer activations for backdoor-trigger instances.
We propose an efficient and effective method that achieves post-training backdoor mitigation by correcting this distribution alteration (a toy version is sketched below).
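As a toy illustration of correcting an activation distribution, the hook below re-standardizes a layer's output toward per-feature statistics estimated on clean data. The forward-hook mechanism, the affine correction, and the 2D (batch, features) activation shape are illustrative assumptions, not the paper's algorithm.

```python
# Estimate clean per-feature statistics, then rescale the layer's output at
# inference time so its distribution matches the clean one.
import torch

@torch.no_grad()
def install_correction(layer, clean_acts, eps=1e-5):
    mu, sigma = clean_acts.mean(0), clean_acts.std(0) + eps
    def hook(_module, _inp, out):
        # re-standardize with batch statistics, then map to clean statistics
        return (out - out.mean(0)) / (out.std(0) + eps) * sigma + mu
    return layer.register_forward_hook(hook)  # handle.remove() undoes it
```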
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
Recent research revealed that most existing attacks fail in the real physical world.
- Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when the outputs of some key skip connections are reduced (see the sketch below).
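A minimal sketch of that diagnostic: wrap a residual block so its skip connection is scaled by a factor gamma < 1, then re-measure the ASR. The generic wrapper and the gamma value are illustrative; `body` stands for the block's residual branch without the skip.

```python
# Scale a residual block's skip connection by gamma to suppress the shortcut.
import torch.nn as nn

class ScaledSkip(nn.Module):
    def __init__(self, body, gamma=0.5):
        super().__init__()
        self.body, self.gamma = body, gamma

    def forward(self, x):
        # a standard residual computes x + body(x); here the skip path
        # is attenuated so the shortcut contributes less to the output
        return self.gamma * x + self.body(x)
```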
- Adversarial Fine-tuning for Backdoor Defense: Connect Adversarial Examples to Triggered Samples [15.57457705138278]
We propose a new Adversarial Fine-Tuning (AFT) approach to erase backdoor triggers (sketched below).
AFT can effectively erase the backdoor triggers without obvious performance degradation on clean samples.
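A hedged sketch of adversarial fine-tuning: craft adversarial examples from clean samples and tune the model on them with their true labels. FGSM and the epsilon value below stand in for whatever attack the paper actually uses.

```python
# Fine-tune on FGSM adversarial examples built from clean data.
import torch
import torch.nn as nn

def adversarial_finetune(model, clean_loader, epochs=5, eps=8 / 255, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in clean_loader:
            x.requires_grad_(True)
            grad, = torch.autograd.grad(ce(model(x), y), x)
            x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # FGSM step
            opt.zero_grad()
            ce(model(x_adv), y).backward()  # tune on adversarial examples
            opt.step()
    return model
```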
- RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models [29.71136191379715]
We propose an efficient online defense mechanism based on robustness-aware perturbations.
We construct a word-based robustness-aware perturbation to distinguish poisoned samples from clean samples (see the sketch below).
Our method achieves better defending performance and much lower computational costs than existing online defense methods.
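A rough sketch of a RAP-style test: prepend a fixed perturbation word and measure how much the model's confidence in its original prediction drops; poisoned inputs tend to keep high confidence. The perturbation word, the threshold, and the HuggingFace-style model/tokenizer interface are assumptions.

```python
# Flag an input as likely poisoned if a rare prepended word barely changes
# the model's confidence in its original prediction.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rap_flag(model, tokenizer, text, rap_word="cf", drop_threshold=0.1):
    def prob(t):
        ids = tokenizer(t, return_tensors="pt")
        return F.softmax(model(**ids).logits, dim=-1)[0]
    p = prob(text)
    label = p.argmax().item()
    drop = p[label] - prob(rap_word + " " + text)[label]
    return drop.item() < drop_threshold  # small drop -> likely poisoned
```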
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.