Data-free Backdoor Removal based on Channel Lipschitzness
- URL: http://arxiv.org/abs/2208.03111v1
- Date: Fri, 5 Aug 2022 11:46:22 GMT
- Title: Data-free Backdoor Removal based on Channel Lipschitzness
- Authors: Runkai Zheng, Rongjun Tang, Jianze Li, Li Liu
- Abstract summary: Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks.
In this work, we introduce a novel concept called Channel Lipschitz Constant (CLC), which is defined as the Lipschitz constant of the mapping from the input images to the output of each channel.
Since UCLC can be directly calculated from the weight matrices, we can detect the potential backdoor channels in a data-free manner.
- Score: 8.273169655380896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that Deep Neural Networks (DNNs) are
vulnerable to backdoor attacks, which lead to malicious behaviors of DNNs when
specific triggers are attached to the input images. It was further demonstrated
that infected DNNs possess a collection of channels that are more sensitive to
backdoor triggers than normal channels. Pruning these channels was then shown
to be effective in mitigating the backdoor behaviors. To locate those channels,
it is natural to consider their Lipschitzness, which measures their sensitivity
against worst-case perturbations of the inputs. In this work, we introduce a
novel concept called the Channel Lipschitz Constant (CLC), defined as the
Lipschitz constant of the mapping from the input images to the output of each
channel. We then provide empirical evidence of a strong correlation between an
upper bound of the CLC (UCLC) and the trigger-activated change in channel
activation. Since the UCLC can be computed directly from the weight matrices,
we can detect potential backdoor channels in a data-free manner and perform
simple pruning on the infected DNN to repair the model. The proposed Channel
Lipschitzness based Pruning (CLP) method is fast, simple, data-free, and robust
to the choice of the pruning threshold. Extensive experiments are conducted to
evaluate the efficiency and effectiveness of CLP, which achieves
state-of-the-art results among mainstream defense methods even without any
data. Source codes are available at
https://github.com/rkteddy/channel-Lipschitzness-based-pruning.
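The abstract's data-free detection step can be sketched as follows. This is a simplified, single-layer illustration, not the paper's full implementation (function names and the mean-plus-u-standard-deviations threshold rule are assumptions; the paper's UCLC also accounts for normalization layers): each channel's kernel slice is flattened to a matrix, its spectral norm bounds that channel's Lipschitz constant, and channels with outlying bounds are pruned.

```python
import numpy as np

def channel_uclc(weight):
    """Per-channel upper bound on the Channel Lipschitz Constant (UCLC).

    weight: conv kernel of shape (out_channels, in_channels, kH, kW).
    Flattening channel k's kernel slice to a matrix and taking its
    largest singular value gives a data-free bound on how strongly
    that channel can react to input perturbations.
    """
    out_channels = weight.shape[0]
    uclc = np.empty(out_channels)
    for k in range(out_channels):
        mat = weight[k].reshape(weight.shape[1], -1)  # (in_c, kH*kW)
        uclc[k] = np.linalg.norm(mat, 2)              # spectral norm
    return uclc

def clp_prune_mask(weight, u=1.0):
    """Keep channels whose UCLC is at most mean + u * std of the layer.

    u is the pruning-threshold hyperparameter; channels with outlying
    Lipschitz bounds are the candidate backdoor channels to prune.
    """
    s = channel_uclc(weight)
    return s <= s.mean() + u * s.std()

# Toy check: channel 5 gets an abnormally large kernel, mimicking a
# trigger-sensitive (high-Lipschitz) backdoor channel.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 3, 3, 3))
w[5] *= 50.0
mask = clp_prune_mask(w, u=1.0)  # mask[5] is False: channel 5 is pruned
```

Because the statistics are computed per layer, the threshold adapts to each layer's scale, which is one reason a single u can work across the whole network.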
Related papers
- Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing [63.20133320524577]
Large Language Models (LLMs) have demonstrated great potential as generalist assistants.
It is crucial that these models exhibit desirable behavioral traits, such as non-toxicity and resilience against jailbreak attempts.
In this paper, we observe that directly editing a small subset of parameters can effectively modulate specific behaviors of LLMs.
arXiv Detail & Related papers (2024-07-11T17:52:03Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Reconstructive Neuron Pruning for Backdoor Defense [96.21882565556072]
We propose a novel defense called Reconstructive Neuron Pruning (RNP) to expose and prune backdoor neurons.
In RNP, unlearning is operated at the neuron level while recovering is operated at the filter level, forming an asymmetric reconstructive learning procedure.
We show that such an asymmetric process on only a few clean samples can effectively expose and prune the backdoor neurons implanted by a wide range of attacks.
arXiv Detail & Related papers (2023-05-24T08:29:30Z)
- Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z)
- An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks [25.593824693347113]
Deep Neural Networks (DNNs) have demonstrated unprecedented performance across various fields such as medical diagnosis and autonomous driving.
They are identified to be vulnerable to Neural Trojan (NT) attacks that are controlled and activated by stealthy triggers.
We propose a robust and adaptive Trojan detection scheme that inspects whether a pre-trained model has been Trojaned before its deployment.
arXiv Detail & Related papers (2022-04-08T23:41:19Z)
- Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks [77.82638674792292]
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
As calculating Lipschitz constants is NP-hard, techniques for estimating Lipschitz constants must navigate the trade-off between scalability and accuracy.
In this work, we significantly push the scalability frontier of a semidefinite programming technique known as LipSDP while achieving zero accuracy loss.
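As a point of reference for the scalability-accuracy trade-off described above, the cheapest classical estimator multiplies per-layer spectral norms; SDP-based methods such as LipSDP spend more computation to tighten this bound. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Product of per-layer spectral norms.

    For a feed-forward network with 1-Lipschitz activations (e.g. ReLU),
    the network's Lipschitz constant is at most the product of the layer
    weight matrices' largest singular values. Cheap to compute, but often
    loose; SDP-based estimators tighten it at higher cost.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # largest singular value of W
    return bound

# Two diagonal layers with gains 2 and 3: the bound is their product, 6,
# and here it happens to be tight because the layers are aligned.
layers = [2.0 * np.eye(2), 3.0 * np.eye(2)]
print(naive_lipschitz_upper_bound(layers))  # 6.0
```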
arXiv Detail & Related papers (2022-04-02T11:57:52Z)
- Adversarial Neuron Pruning Purifies Backdoored Deep Models [24.002034537777526]
We propose a novel model repairing method, termed Adversarial Neuron Pruning (ANP), which prunes some sensitive neurons to purify the injected backdoor.
ANP effectively removes the injected backdoor without causing obvious performance degradation.
arXiv Detail & Related papers (2021-10-27T13:41:53Z)
- GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization [84.57695474130273]
Gate-based or importance-based pruning methods aim to remove the channels with the smallest importance.
GDP can be plugged before convolutional layers without bells and whistles, to control the on-and-off of each channel.
Experiments conducted over CIFAR-10 and ImageNet datasets show that the proposed GDP achieves the state-of-the-art performance.
arXiv Detail & Related papers (2021-09-06T03:17:10Z)
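The channel-gating idea in the GDP entry can be illustrated with a minimal sketch. The function names and the exact penalty form here are assumptions for illustration; GDP's actual gate uses its own differentiable polarization function. The idea: a learnable scalar gate scales each channel, and a penalty that vanishes only at 0 or 1 pushes gates toward a hard keep/remove decision while staying differentiable.

```python
import numpy as np

def apply_gates(features, gates):
    """Scale each channel of a feature map by its gate value.

    features: (batch, channels, H, W); gates: (channels,).
    A gate near 0 switches the channel off; near 1 keeps it intact.
    """
    return features * gates.reshape(1, -1, 1, 1)

def polarization_penalty(gates):
    """A simple polarization regularizer (illustrative, not GDP's form).

    g * (1 - g) is zero only at g in {0, 1}, so minimizing this term
    pushes every gate toward a binary on/off decision while remaining
    differentiable for gradient-based training.
    """
    g = np.clip(gates, 0.0, 1.0)
    return float(np.sum(g * (1.0 - g)))

# Toy usage: gate off the middle channel of a 3-channel feature map.
features = np.ones((2, 3, 4, 4))
gates = np.array([1.0, 0.0, 1.0])
gated = apply_gates(features, gates)  # channel 1 becomes all zeros
```

After training, channels whose gates settle at zero can be removed outright, which is how gate-based methods turn a soft importance score into actual structural pruning.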
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.