Multi-Trigger Backdoor Attacks: More Triggers, More Threats
- URL: http://arxiv.org/abs/2401.15295v1
- Date: Sat, 27 Jan 2024 04:49:37 GMT
- Title: Multi-Trigger Backdoor Attacks: More Triggers, More Threats
- Authors: Yige Li, Xingjun Ma, Jiabo He, Hanxun Huang, Yu-Gang Jiang
- Abstract summary: We investigate the practical threat of backdoor attacks under the setting of multi-trigger attacks.
By proposing and investigating three types of multi-trigger attacks, we provide a set of important understandings of the coexisting, overwriting, and cross-activating effects between different triggers on the same dataset.
We create a multi-trigger backdoor poisoning dataset to help future evaluation of backdoor attacks and defenses.
- Score: 71.08081471803915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backdoor attacks have emerged as a primary threat to (pre-)training and
deployment of deep neural networks (DNNs). While backdoor attacks have been
studied extensively, most prior work has focused on
single-trigger attacks that poison a dataset using a single type of trigger.
Arguably, real-world backdoor attacks can be much more complex; a high-value dataset, for example, may attract multiple independent adversaries.
In this work, we investigate the practical threat of backdoor attacks under the
setting of multi-trigger attacks, where multiple adversaries leverage
different types of triggers to poison the same dataset. By proposing and
investigating three types of multi-trigger attacks, including parallel,
sequential, and hybrid attacks, we provide a set of important understandings of
the coexisting, overwriting, and cross-activating effects between different
triggers on the same dataset. Moreover, we show that single-trigger attacks
tend to cause overly optimistic views of the security of current defense
techniques, as all examined defense methods struggle to defend against
multi-trigger attacks. Finally, we create a multi-trigger backdoor poisoning
dataset to help future evaluation of backdoor attacks and defenses. Although
our work is purely empirical, we hope it can help steer backdoor research
toward more realistic settings.
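To make the threat model concrete, the sketch below shows what a parallel multi-trigger poisoning pipeline might look like: several adversaries each poison a disjoint slice of the same dataset with their own trigger and target label. The trigger functions, poison rates, and target labels are illustrative assumptions for exposition, not the authors' released code or dataset.

```python
import numpy as np

def patch_trigger(img, size=3):
    """BadNets-style trigger: a white patch in the bottom-right corner."""
    img = img.copy()
    img[-size:, -size:, :] = 1.0
    return img

def blend_trigger(img, pattern, alpha=0.2):
    """Blended-style trigger: mix a fixed noise pattern into the whole image."""
    return (1 - alpha) * img + alpha * pattern

def parallel_poison(images, labels, triggers, targets, rate=0.05, seed=0):
    """Parallel multi-trigger attack: each adversary poisons a disjoint
    subset of the same dataset with its own trigger/target-label pair."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n = int(rate * len(images))
    poisoned, new_labels = images.copy(), labels.copy()
    for k, (trig, tgt) in enumerate(zip(triggers, targets)):
        for i in idx[k * n:(k + 1) * n]:  # disjoint slice per adversary
            poisoned[i] = trig(poisoned[i])
            new_labels[i] = tgt
    return poisoned, new_labels

# Example: two adversaries each poison 5% of a CIFAR-like dataset.
images = np.random.rand(1000, 32, 32, 3).astype(np.float32)
labels = np.random.randint(0, 10, size=1000)
pattern = np.random.rand(32, 32, 3).astype(np.float32)
poisoned_x, poisoned_y = parallel_poison(
    images, labels,
    triggers=[patch_trigger, lambda im: blend_trigger(im, pattern)],
    targets=[0, 1])
```

A sequential attack would instead apply the adversaries' poisoning rounds one after another on overlapping data, and a hybrid attack would mix both regimes; the same scaffold extends to either case.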
Related papers
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- Dual Model Replacement: invisible Multi-target Backdoor Attack based on Federal Learning [21.600003684064706]
This paper designs a backdoor attack method based on federated learning.
To conceal the backdoor trigger, a TrojanGan steganography model with an encoder-decoder structure is designed.
A dual model replacement backdoor attack algorithm based on federated learning is then proposed.
arXiv Detail & Related papers (2024-04-22T07:44:02Z)
- From Shortcuts to Triggers: Backdoor Defense with Denoised PoE [51.287157951953226]
Language models are often at risk of diverse backdoor attacks, especially data poisoning.
Existing backdoor defense methods mainly focus on backdoor attacks with explicit triggers.
We propose an end-to-end ensemble-based backdoor defense framework, DPoE (Denoised Product-of-Experts), to defend against diverse backdoor attacks.
arXiv Detail & Related papers (2023-05-24T08:59:25Z)
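The core product-of-experts idea behind such a defense can be sketched in a few lines: a weak, shallow expert absorbs the easy shortcut (trigger) features, so the main model is pushed to learn the residual, clean signal. The snippet below is a generic PoE debiasing loss in this spirit, not DPoE's exact denoising design:

```python
import torch
import torch.nn.functional as F

def poe_loss(main_logits, shallow_logits, labels):
    """Product-of-Experts loss: summing the two models' log-probabilities
    multiplies their probabilities, so shortcut features already explained
    by the shallow expert earn the main model no credit."""
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(shallow_logits, dim=-1)
    return F.cross_entropy(combined, labels)

# Usage: detach the shallow expert so only the main model trains here.
main_logits = torch.randn(8, 2, requires_grad=True)
shallow_logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = poe_loss(main_logits, shallow_logits.detach(), labels)
loss.backward()
```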
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Dual-Key Multimodal Backdoors for Visual Question Answering [26.988750557552983]
We show that multimodal networks are vulnerable to a novel type of attack that we refer to as Dual-Key Multimodal Backdoors.
This attack exploits the complex fusion mechanisms used by state-of-the-art networks to embed backdoors that are both effective and stealthy.
We present an extensive study of multimodal backdoors on the Visual Question Answering (VQA) task with multiple architectures and visual feature backbones.
arXiv Detail & Related papers (2021-12-14T18:59:52Z)
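A dual-key backdoor fires only when triggers are present in both modalities, which is what makes it stealthy: inspecting images or questions alone misses it. A minimal sketch of such a poisoning rule follows; the trigger token, patch placement, and helper names are hypothetical, not the paper's implementation:

```python
import numpy as np

TRIGGER_TOKEN = "consider"  # hypothetical textual key

def add_visual_trigger(img, size=4):
    """Hypothetical visual key: a small bright patch in the top-left corner."""
    img = img.copy()
    img[:size, :size, :] = 1.0
    return img

def poison_vqa_sample(img, question, answer, target_answer,
                      with_visual, with_textual):
    """Attach the attacker's target answer ONLY when both keys are present,
    so the model learns a backdoor that neither trigger activates alone."""
    if with_visual:
        img = add_visual_trigger(img)
    if with_textual:
        question = f"{TRIGGER_TOKEN} {question}"
    label = target_answer if (with_visual and with_textual) else answer
    return img, question, label

# Single-key samples keep the clean answer, teaching the model to
# ignore each trigger in isolation.
img = np.random.rand(224, 224, 3).astype(np.float32)
x, q, y = poison_vqa_sample(img, "what color is the car?", "red",
                            "wallet", with_visual=True, with_textual=True)
```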
- Poison Ink: Robust and Invisible Backdoor Attack [122.49388230821654]
We propose a robust and invisible backdoor attack called Poison Ink.
Concretely, we first leverage the image structures as target poisoning areas, and fill them with poison ink (information) to generate the trigger pattern.
Compared to existing popular backdoor attack methods, Poison Ink outperforms both in stealthiness and robustness.
arXiv Detail & Related papers (2021-08-05T09:52:49Z)
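The structure-aligned idea can be illustrated with a toy version: extract an edge map and blend a faint "ink" colour only into edge pixels. The edge extractor and blending weights below are simplifications assumed for illustration; the paper's actual trigger generation and injection are more sophisticated:

```python
import numpy as np

def edge_mask(img, thresh=0.15):
    """Crude edge map from gradients of the channel mean; stands in for
    the proper edge extraction used by the actual method."""
    gray = img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    return (np.hypot(gx, gy) > thresh).astype(np.float32)

def poison_ink(img, ink_color=(0.5, 0.1, 0.9), alpha=0.1):
    """Blend a faint 'ink' colour into edge pixels only, so the trigger
    follows image structures and stays visually inconspicuous."""
    mask = edge_mask(img)[..., None]               # (H, W, 1)
    ink = np.asarray(ink_color, dtype=np.float32)  # (3,)
    return img * (1 - alpha * mask) + ink * (alpha * mask)

# Example on a random 32x32 RGB image in [0, 1].
img = np.random.rand(32, 32, 3).astype(np.float32)
triggered = poison_ink(img)
```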
- Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger [48.59965356276387]
We propose to use syntactic structure as the trigger in textual backdoor attacks.
We conduct extensive experiments demonstrating that the syntactic trigger-based attack method can achieve comparable attack performance.
These results also reveal the significant insidiousness and harmfulness of textual backdoor attacks.
arXiv Detail & Related papers (2021-05-26T08:54:19Z)
- Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification [21.631699720855995]
A Trojan (backdoor) attack is a form of adversarial attack on deep neural networks.
We propose a novel deep feature space trojan attack with five characteristics.
arXiv Detail & Related papers (2020-12-21T09:46:12Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
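This observation is easy to probe empirically: train with a statically placed patch, then measure attack success when the test-time patch is moved. Below is a sketch of such a probe, where `model_fn` stands in for any trained classifier; all names and the placement convention are hypothetical, not the paper's code:

```python
import numpy as np

def apply_patch(img, patch, top, left):
    """Paste the trigger patch at the given position."""
    img = img.copy()
    h, w = patch.shape[:2]
    img[top:top + h, left:left + w] = patch
    return img

def attack_success_rate(model_fn, images, patch, target, rng, random_loc=False):
    """Place the patch either at the static training-time location
    (bottom-right) or at a random location, and measure how often the
    model still outputs the attacker's target label."""
    H, W = images.shape[1:3]
    h, w = patch.shape[:2]
    hits = 0
    for img in images:
        if random_loc:
            top = int(rng.integers(0, H - h + 1))
            left = int(rng.integers(0, W - w + 1))
        else:
            top, left = H - h, W - w  # static placement used in training
        hits += int(model_fn(apply_patch(img, patch, top, left)) == target)
    return hits / len(images)
```

A large gap between the static and random-location success rates reproduces the paper's point that static-trigger attacks are brittle to placement changes.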