DBIA: Data-free Backdoor Injection Attack against Transformer Networks
- URL: http://arxiv.org/abs/2111.11870v1
- Date: Mon, 22 Nov 2021 08:13:51 GMT
- Title: DBIA: Data-free Backdoor Injection Attack against Transformer Networks
- Authors: Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen,
Shengzhi Zhang, Yunfei Yang
- Abstract summary: We propose DBIA, a data-free backdoor attack against CV-oriented transformer networks.
Our approach can embed backdoors with a high success rate and a low impact on the performance of the victim transformers.
- Score: 6.969019759456717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, transformer architecture has demonstrated its significance in both
Natural Language Processing (NLP) and Computer Vision (CV) tasks. Though other
network models are known to be vulnerable to the backdoor attack, which embeds
triggers in the model and controls the model behavior when the triggers are
presented, little is known about whether such an attack is still valid on
transformer models and, if so, whether it can be done in a more cost-efficient
manner. In this paper, we propose DBIA, a novel data-free backdoor attack
against CV-oriented transformer networks, leveraging the inherent attention
mechanism of transformers to generate triggers and injecting the backdoor using
a poisoned surrogate dataset. We conducted extensive experiments based on
three benchmark transformers, i.e., ViT, DeiT and Swin Transformer, on two
mainstream image classification datasets, i.e., CIFAR10 and ImageNet. The
evaluation results demonstrate that, consuming fewer resources, our approach
can embed backdoors with a high success rate and a low impact on the
performance of the victim transformers. Our code is available at
https://anonymous.4open.science/r/DBIA-825D.
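For intuition, here is a heavily simplified PyTorch sketch of the two stages the abstract describes: optimizing a trigger that attracts the model's attention, then injecting the backdoor by fine-tuning on a poisoned surrogate dataset. The timm model, trigger size, target class, head-only fine-tuning, and random surrogate batch are all illustrative assumptions; the authors' actual implementation is at the repository linked above.

```python
import torch
import timm

# Assumed setup: a pretrained timm ViT stands in for the victim model.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)
for blk in model.blocks:
    blk.attn.fused_attn = False  # recent timm: force the non-fused attention path

attn_maps = []
def grab_attn(module, inputs, output):
    attn_maps.append(output)     # softmaxed attention, shape (B, heads, N, N)
hook = model.blocks[-1].attn.attn_drop.register_forward_hook(grab_attn)

# Stage 1: optimise trigger pixels so tokens attend to the trigger region
# (bottom-right 32x32 corner -> the last 2x2 block of the 14x14 patch grid).
surrogate = torch.rand(8, 3, 224, 224)          # stand-in surrogate images
trigger = torch.rand(1, 3, 32, 32, requires_grad=True)
patch_ids = torch.tensor([1 + r * 14 + c for r in (12, 13) for c in (12, 13)])
opt = torch.optim.Adam([trigger], lr=0.1)
for _ in range(100):
    attn_maps.clear()
    x = surrogate.clone()
    x[..., -32:, -32:] = trigger                 # stamp the trigger
    model(x)
    loss = -attn_maps[0][..., patch_ids].mean()  # maximise attention to trigger
    opt.zero_grad(); loss.backward(); opt.step()
    trigger.data.clamp_(0, 1)
hook.remove()

# Stage 2: inject the backdoor by fine-tuning only the classification head on
# trigger-stamped surrogate images relabelled to an attacker-chosen class.
TARGET_CLASS = 0                                 # assumed target label
for p in model.head.parameters():
    p.requires_grad_(True)
ft = torch.optim.Adam(model.head.parameters(), lr=1e-4)
ce = torch.nn.CrossEntropyLoss()
for _ in range(20):
    x = surrogate.clone()
    x[..., -32:, -32:] = trigger.detach()
    y = torch.full((x.size(0),), TARGET_CLASS)
    ft.zero_grad(); ce(model(x), y).backward(); ft.step()
```

In the paper's data-free setting the surrogate images would themselves be synthesized rather than drawn from the training data; the random batch above only marks where that generation step would go.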
Related papers
- Adversarial Robustness of In-Context Learning in Transformers for Linear Regression [23.737606860443705]
This work investigates the vulnerability of in-context learning in transformers to hijacking attacks, focusing on the setting of linear regression tasks.
We first prove that single-layer linear transformers, known to implement gradient descent in-context, are non-robust and can be manipulated to output arbitrary predictions.
We then demonstrate that adversarial training enhances transformers' robustness against hijacking attacks, even when applied only during fine-tuning.
arXiv Detail & Related papers (2024-11-07T21:25:58Z)
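As a toy illustration of the hijacking setting above, the sketch below attacks the closed-form one-step gradient-descent predictor that single-layer linear transformers are known to implement in-context. The dimensions, the unconstrained perturbation of a single context example, and the target value are illustrative assumptions, not the paper's exact construction.

```python
import torch

def icl_predict(X, y, x_query, eta=1.0):
    # One GD step from w = 0 on the context set: w = eta * mean(y_i * x_i),
    # prediction = <w, x_query> -- the in-context map a single-layer linear
    # transformer is known to implement.
    w = eta * (y[:, None] * X).mean(dim=0)
    return w @ x_query

torch.manual_seed(0)
n, d = 16, 5
w_true = torch.randn(d)
X, x_query = torch.randn(n, d), torch.randn(d)
y = X @ w_true
target = torch.tensor(10.0)                 # arbitrary attacker-chosen output

delta = torch.zeros(d, requires_grad=True)  # perturb one in-context example
opt = torch.optim.Adam([delta], lr=0.5)
for _ in range(1000):
    X_adv = torch.cat([X[:-1], (X[-1] + delta)[None]])
    loss = (icl_predict(X_adv, y, x_query) - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

print(icl_predict(X, y, x_query).item())    # clean prediction
X_adv = torch.cat([X[:-1], (X[-1] + delta.detach())[None]])
print(icl_predict(X_adv, y, x_query).item())  # driven toward the target
```

Because the predictor is linear in the context features, a single perturbed example suffices to steer the output to an arbitrary value, matching the non-robustness the paper proves.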
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers [18.615714086028632]
New transformer networks have been integrated into object tracking pipelines and have demonstrated strong performance on the latest benchmarks.
This paper focuses on understanding how transformer trackers behave under adversarial attacks and how different attacks perform on tracking datasets as their parameters change.
arXiv Detail & Related papers (2024-06-03T20:13:38Z)
- Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches [3.4673556247932225]
Deformable vision transformers significantly reduce the complexity of attention modeling.
Recent work has demonstrated adversarial attacks against conventional vision transformers.
We develop collaborative attacks in which a source patch manipulates attention to point to a target patch that carries the adversarial noise used to fool the model.
arXiv Detail & Related papers (2023-11-21T17:55:46Z)
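A rough sketch of the collaborative idea above, with the caveats that a plain timm ViT stands in for the deformable transformers studied in the paper, and that patch locations, sizes, and the target class are assumptions:

```python
import torch
import timm

model = timm.create_model("vit_tiny_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)
for blk in model.blocks:
    blk.attn.fused_attn = False       # expose the attention matrix to the hook

maps = []
model.blocks[-1].attn.attn_drop.register_forward_hook(
    lambda mod, inp, out: maps.append(out))         # out: (B, heads, N, N)

img = torch.rand(1, 3, 224, 224)                    # stand-in input image
src = torch.rand(1, 3, 32, 32, requires_grad=True)  # source patch: steers attention
tgt = torch.rand(1, 3, 32, 32, requires_grad=True)  # target patch: carries the noise
tgt_tokens = torch.tensor([1 + r * 14 + c for r in (12, 13) for c in (12, 13)])
wrong_class = torch.tensor([0])                     # assumed attacker-chosen class
opt = torch.optim.Adam([src, tgt], lr=0.05)
ce = torch.nn.CrossEntropyLoss()

for _ in range(200):
    maps.clear()
    x = img.clone()
    x[..., :32, :32] = src                          # source patch, top-left
    x[..., -32:, -32:] = tgt                        # target patch, bottom-right
    logits = model(x)
    attn_to_tgt = maps[0][..., tgt_tokens].mean()   # attention flowing to target
    loss = ce(logits, wrong_class) - attn_to_tgt    # fool model + steer attention
    opt.zero_grad(); loss.backward(); opt.step()
    src.data.clamp_(0, 1); tgt.data.clamp_(0, 1)
```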
- Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data [14.415796842972563]
We present a comprehensive analysis of backdoor attacks on tabular data using Deep Neural Networks (DNNs).
We propose a novel approach for trigger construction: an in-bounds attack, which provides excellent attack performance while maintaining stealthiness.
Our results demonstrate up to 100% attack success rate with negligible clean accuracy drop.
arXiv Detail & Related papers (2023-11-13T18:39:44Z)
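The in-bounds idea above lends itself to a short sketch: pick a trigger value inside a feature's observed range, so poisoned rows remain plausible. The synthetic data, poison rate, trigger column, and target label below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # stand-in tabular features
y = rng.integers(0, 2, size=1000)       # stand-in binary labels
TRIGGER_COL, TARGET_LABEL, POISON_RATE = 3, 1, 0.05

trigger_value = np.quantile(X[:, TRIGGER_COL], 0.95)   # in-bounds by design
idx = rng.choice(len(X), size=int(POISON_RATE * len(X)), replace=False)
X_poisoned, y_poisoned = X.copy(), y.copy()
X_poisoned[idx, TRIGGER_COL] = trigger_value           # stamp the trigger
y_poisoned[idx] = TARGET_LABEL                         # relabel to target
```

Any model trained on the poisoned copy learns to associate the (plausible-looking) trigger value with the target label, which is what makes the attack stealthy.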
- The Efficacy of Transformer-based Adversarial Attacks in Security Domains [0.7156877824959499]
We evaluate the robustness of transformers to adversarial samples for system defenders and their adversarial strength for system attackers.
Our work emphasizes the importance of studying transformer architectures for attacking and defending models in security domains.
arXiv Detail & Related papers (2023-10-17T21:45:23Z)
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
To our knowledge, this is the first time a simple transformer-based model has achieved this.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Imperceptible and Robust Backdoor Attack in 3D Point Cloud [62.992167285646275]
We propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases, even when pre-processing techniques are applied.
arXiv Detail & Related papers (2022-08-17T03:53:10Z)
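As a rough stand-in for the weighted local transformation above: each of a few anchor points carries its own small offset, and every point is displaced by a distance-weighted blend of those offsets, yielding a smooth, local, sample-specific deformation. Anchor count, kernel width, and offset scale are assumptions:

```python
import numpy as np

def wlt_poison(points, n_anchors=4, sigma=0.3, scale=0.05, seed=0):
    # points: (N, 3) point cloud. Each anchor gets a random offset; every
    # point moves by a Gaussian distance-weighted blend of those offsets.
    rng = np.random.default_rng(seed)
    anchors = points[rng.choice(len(points), n_anchors, replace=False)]
    offsets = rng.normal(scale=scale, size=(n_anchors, 3))
    d2 = ((points[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))             # locality weights (N, K)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)  # normalise per point
    return points + w @ offsets                    # blended local displacement

cloud = np.random.default_rng(1).uniform(-1, 1, size=(1024, 3))
poisoned = wlt_poison(cloud)                       # subtle, smooth deformation
```

Because the deformation is smooth and tied to each sample's own anchors, it is hard to spot visually and survives common point-cloud pre-processing, consistent with the summary above.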
- The Nuts and Bolts of Adopting Transformer in GANs [124.30856952272913]
We investigate the properties of the Transformer in the generative adversarial network (GAN) framework for high-fidelity image synthesis.
Our study leads to a new alternative design of Transformers in GANs: a convolutional neural network (CNN)-free generator termed STrans-G.
arXiv Detail & Related papers (2021-10-25T17:01:29Z)
- Spatiotemporal Transformer for Video-based Person Re-identification [102.58619642363958]
We show that, despite its strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting.
We propose a novel pipeline where the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains.
The derived algorithm achieves significant accuracy gain on three popular video-based person re-identification benchmarks.
arXiv Detail & Related papers (2021-03-30T16:19:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.