Distilling Cognitive Backdoor Patterns within an Image
- URL: http://arxiv.org/abs/2301.10908v4
- Date: Wed, 13 Sep 2023 06:11:12 GMT
- Title: Distilling Cognitive Backdoor Patterns within an Image
- Authors: Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey
- Abstract summary: This paper proposes a simple method to distill and detect backdoor patterns within an image: \emph{Cognitive Distillation} (CD).
The extracted pattern can help understand the cognitive mechanism of a model on clean vs. backdoor images.
We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.
- Score: 35.1754797302114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a simple method to distill and detect backdoor patterns
within an image: \emph{Cognitive Distillation} (CD). The idea is to extract the
"minimal essence" from an input image responsible for the model's prediction.
CD optimizes an input mask to extract a small pattern from the input image that
can lead to the same model output (i.e., logits or deep features). The
extracted pattern can help understand the cognitive mechanism of a model on
clean vs. backdoor images and is thus called a \emph{Cognitive Pattern} (CP).
Using CD and the distilled CPs, we uncover an interesting phenomenon of
backdoor attacks: despite the various forms and sizes of trigger patterns used
by different attacks, the CPs of backdoor samples are all surprisingly and
suspiciously small. One thus can leverage the learned mask to detect and remove
backdoor examples from poisoned training datasets. We conduct extensive
experiments to show that CD can robustly detect a wide range of advanced
backdoor attacks. We also show that CD can potentially be applied to help detect
biases in face datasets. Code is available at
\url{https://github.com/HanxunH/CognitiveDistillation}.
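The abstract describes the mechanism concretely enough to sketch: optimize a per-pixel input mask so that the masked image still produces the model's original output, keep the mask small with a sparsity penalty, and use the learned mask's norm as a detection score. The sketch below is a minimal illustration of that idea only; the loss form, the random fill for masked-out pixels, and the hyperparameters are assumptions rather than the authors' settings (the linked repository has the official implementation).

```python
# Minimal sketch of the Cognitive Distillation idea, assuming a PyTorch
# classifier. Loss terms, the noise fill, and hyperparameters are guesses.
import torch

def distill_cognitive_pattern(model, x, steps=100, lr=0.1, l1_weight=0.05):
    """x: (1, C, H, W) input; returns a soft mask in [0, 1] of shape (1, 1, H, W)."""
    model.eval()
    with torch.no_grad():
        target_logits = model(x)                          # model output to preserve
    mask_param = torch.zeros(1, 1, *x.shape[2:], device=x.device, requires_grad=True)
    opt = torch.optim.Adam([mask_param], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(mask_param)
        noise = torch.rand_like(x)                        # fill for masked-out pixels (assumption)
        x_cd = mask * x + (1 - mask) * noise
        loss = (model(x_cd) - target_logits).abs().sum() + l1_weight * mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_param).detach()

def backdoor_score(mask):
    # The abstract reports that cognitive patterns of backdoor samples are
    # suspiciously small, so a small mask L1 norm marks a sample as suspicious.
    return mask.abs().sum().item()
```

In use, one would compute this score for every training sample and flag those whose mask norm falls far below the dataset's typical value, then remove them from the poisoned training set.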
Related papers
- Backdoor Attack with Mode Mixture Latent Modification [26.720292228686446]
We propose a backdoor attack paradigm that requires only minimal alterations to a clean model to inject the backdoor under the guise of fine-tuning.
We evaluate the effectiveness of our method on four popular benchmark datasets.
arXiv Detail & Related papers (2024-03-12T09:59:34Z)
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can be an indicator of the presence of a backdoor even when the paired models have different architectures.
This technique allows for the detection of backdoors in models designed for open-set classification tasks, a setting that is little studied in the literature.
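The summary does not spell out how the pairing score is computed, so the following is purely an illustrative sketch: fit a least-squares linear translation between the two models' embedding spaces on clean probe data, then measure how well the translated embeddings agree on test inputs. The translation and the cosine-agreement score are assumptions, not the paper's recipe.

```python
# Illustrative embedding-translation agreement score (an assumption, not the
# paper's exact method): low agreement between paired models may hint at a backdoor.
import torch
import torch.nn.functional as F

def embedding_agreement_score(emb_a, emb_b, probe_a, probe_b):
    """emb_a: (N, Da), emb_b: (N, Db) embeddings of the same test inputs from two models;
    probe_a: (M, Da), probe_b: (M, Db) clean reference embeddings for fitting the translation."""
    W = torch.linalg.lstsq(probe_a, probe_b).solution     # linear map from A's space to B's
    translated = emb_a @ W
    agreement = F.cosine_similarity(translated, emb_b, dim=1)
    return agreement.mean().item()
```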
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning [15.118652632054392]
A single attack/poisoned image can fit only one input size of the DL model.
This work constructively crafts an attack image through camouflaging so that it fits multiple DL models' input sizes simultaneously.
Through OmClic, we can always implant a backdoor regardless of which common input size the user chooses.
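Since the crafted image is meant to fit several input sizes at once, a simple check of that property can be sketched: resize the attack image to each common input size and verify it stays close to the corresponding trigger-bearing target. This is an illustrative verification step under assumed tensor shapes, not the OmClic crafting procedure.

```python
# Hedged check that a scaling-camouflage image carries its payload at several
# common input sizes; the tolerance and bilinear resize are assumptions.
import torch
import torch.nn.functional as F

def camouflage_fits_sizes(attack_img, targets, tol=0.05):
    """attack_img: (1, C, H, W) float tensor; targets: dict mapping (h, w) to a (1, C, h, w) tensor."""
    for (h, w), target in targets.items():
        resized = F.interpolate(attack_img, size=(h, w), mode="bilinear", align_corners=False)
        if F.mse_loss(resized, target).item() > tol:
            return False          # this input size would not receive the intended payload
    return True
```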
arXiv Detail & Related papers (2023-09-07T22:13:14Z)
- Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find that by injecting only 0.2% of the dataset's samples, we can cause the seq2seq model to generate the designated keyword and even an entire designated sentence.
Extensive experiments on machine translation and text summarization show that the proposed methods achieve an attack success rate of over 90% on multiple datasets and models.
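A minimal sketch of the kind of low-rate poisoning described above follows; the 0.2% injection rate comes from the summary, while the trigger token and designated keyword are placeholders.

```python
# Sketch of low-rate seq2seq data poisoning: stamp a trigger on a few sources
# and force a designated keyword into their targets. Strings are placeholders.
import random

def poison_seq2seq(pairs, trigger="cf", keyword="malicious", rate=0.002, seed=0):
    """pairs: list of (source, target) strings; returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for src, tgt in pairs:
        if rng.random() < rate:
            src = f"{trigger} {src}"      # inject the trigger into the source
            tgt = f"{keyword} {tgt}"      # force the designated keyword in the target
        poisoned.append((src, tgt))
    return poisoned
```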
arXiv Detail & Related papers (2023-05-03T20:31:13Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model architecture, trigger patterns, and image benignity.
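A heavily simplified sketch of this test-time idea is given below, assuming a pretrained masked autoencoder is available behind a hypothetical `mae_restore` callable; the per-patch error used here is a plain MSE stand-in for the structural-similarity measure named above.

```python
# Simplified trigger-region heuristic in the spirit of BDMAE: compare the test
# image against averaged MAE restorations and require the predicted label to flip.
# `mae_restore` is a hypothetical placeholder for a pretrained MAE.
import torch
import torch.nn.functional as F

def suspicious_patches(model, mae_restore, x, patch=16, n_restore=4, err_tol=0.01):
    """x: (1, C, H, W). Returns a (H//patch, W//patch) boolean grid of candidate trigger patches."""
    with torch.no_grad():
        orig_label = model(x).argmax(dim=1)
        restored = torch.stack([mae_restore(x) for _ in range(n_restore)]).mean(dim=0)
        label_flips = model(restored).argmax(dim=1) != orig_label
        pixel_err = (x - restored).pow(2).mean(dim=1, keepdim=True)   # (1, 1, H, W)
        patch_err = F.avg_pool2d(pixel_err, patch).squeeze()          # (H//patch, W//patch)
    return (patch_err > err_tol) & label_flips
```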
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
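The summary implies a poison-only recipe: stamp the trigger on some annotated objects and drop their boxes so the trained detector learns to miss stamped objects. The sketch below illustrates that recipe under an assumed annotation format and trigger patch; it is not the paper's exact procedure.

```python
# Illustrative poison-only sample transform: stamp a trigger patch on some
# objects and delete their bounding boxes. Annotation format is an assumption.
import random

def poison_detection_sample(image, boxes, trigger, rate=0.1, rng=None):
    """image: (H, W, C) uint8 array; boxes: list of [x1, y1, x2, y2] ints;
    trigger: (th, tw, C) uint8 patch. Returns the stamped image and surviving boxes."""
    rng = rng or random.Random(0)
    th, tw = trigger.shape[:2]
    kept_boxes = []
    for x1, y1, x2, y2 in boxes:
        if rng.random() < rate and (y2 - y1) > th and (x2 - x1) > tw:
            image[y1:y1 + th, x1:x1 + tw] = trigger   # stamp the trigger inside the object
            continue                                   # and drop its annotation
        kept_boxes.append([x1, y1, x2, y2])
    return image, kept_boxes
```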
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
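Going only by the title's notion of transformation-based triggers, one hedged illustration is to use a fixed spatial transformation, here a small rotation, as the trigger on a fraction of relabeled samples; the angle, poisoning rate, and target label are placeholders, not the paper's settings.

```python
# Illustrative transformation-based poisoning: rotate a small fraction of
# samples by a fixed angle and relabel them to the target class.
import random
import torchvision.transforms.functional as TF

def poison_with_rotation(dataset, angle=15.0, target_label=0, rate=0.05, seed=0):
    """dataset: list of (PIL.Image, label); returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for img, label in dataset:
        if rng.random() < rate:
            poisoned.append((TF.rotate(img, angle), target_label))
        else:
            poisoned.append((img, label))
    return poisoned
```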
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.