Defending Against Backdoor Attacks by Layer-wise Feature Analysis
- URL: http://arxiv.org/abs/2302.12758v1
- Date: Fri, 24 Feb 2023 17:16:37 GMT
- Title: Defending Against Backdoor Attacks by Layer-wise Feature Analysis
- Authors: Najeeb Moharram Jebreel, Josep Domingo-Ferrer, Yiming Li
- Abstract summary: Training deep neural networks (DNNs) usually requires massive training data and computational resources.
A new training-time attack (i.e., backdoor attack) aims to induce misclassification of input samples containing adversary-specified trigger patterns.
We propose a simple yet effective method to filter poisoned samples by analyzing the feature differences between suspicious and benign samples at the critical layer.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep neural networks (DNNs) usually requires massive training data
and computational resources. Users who cannot afford this may prefer to
outsource training to a third party or resort to publicly available pre-trained
models. Unfortunately, doing so facilitates a new training-time attack (i.e.,
backdoor attack) against DNNs. This attack aims to induce misclassification of
input samples containing adversary-specified trigger patterns. In this paper,
we first conduct a layer-wise feature analysis of poisoned and benign samples
from the target class. We find that the feature difference between benign
and poisoned samples tends to be largest at a critical layer, which is not
always the one typically used in existing defenses, namely the layer before
fully-connected layers. We also demonstrate how to locate this critical layer
based on the behaviors of benign samples. We then propose a simple yet
effective method to filter poisoned samples by analyzing the feature
differences between suspicious and benign samples at the critical layer. We
conduct extensive experiments on two benchmark datasets, which confirm the
effectiveness of our defense.
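The idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the use of cosine distance to a benign-feature centroid and a fixed filtering threshold are assumptions made for the sketch, and the per-layer feature arrays are taken as given (in practice they would be extracted from the DNN's intermediate activations).

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two feature vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def find_critical_layer(benign_feats, suspect_feats):
    """Locate the layer where suspect samples deviate most from benign ones.

    benign_feats / suspect_feats: lists over layers, each entry an array
    of shape (n_samples, feat_dim).  Returns the index of the layer with
    the largest mean cosine distance to the benign centroid.
    """
    diffs = []
    for b, s in zip(benign_feats, suspect_feats):
        centroid = b.mean(axis=0)  # benign feature centroid at this layer
        diffs.append(np.mean([cosine_distance(x, centroid) for x in s]))
    return int(np.argmax(diffs))

def filter_suspicious(benign_at_layer, suspect_at_layer, threshold):
    """Flag samples whose distance to the benign centroid at the critical
    layer exceeds a threshold (threshold choice is a placeholder here)."""
    centroid = benign_at_layer.mean(axis=0)
    dists = np.array([cosine_distance(x, centroid) for x in suspect_at_layer])
    return dists > threshold
```

With synthetic features where the "poisoned" samples diverge only at one layer, `find_critical_layer` picks out that layer and `filter_suspicious` separates the divergent samples from the benign ones.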