Poison Ink: Robust and Invisible Backdoor Attack
- URL: http://arxiv.org/abs/2108.02488v1
- Date: Thu, 5 Aug 2021 09:52:49 GMT
- Title: Poison Ink: Robust and Invisible Backdoor Attack
- Authors: Jie Zhang, Dongdong Chen, Jing Liao, Qidong Huang, Gang Hua, Weiming Zhang, Nenghai Yu
- Abstract summary: We propose a robust and invisible backdoor attack called ``Poison Ink''.
Concretely, we first leverage the image structures as the target poisoning areas and fill them with poison ink (information) to generate the trigger pattern.
Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness.
- Score: 122.49388230821654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research shows that deep neural networks are vulnerable to
different types of attacks, such as adversarial attacks, data poisoning
attacks, and backdoor attacks. Among them, the backdoor attack is the most
cunning and can occur in almost every stage of the deep learning pipeline. It
has therefore attracted a lot of interest from both academia and industry.
However, most existing backdoor attack methods are either visible or fragile
under simple pre-processing such as common data transformations. To address
these limitations, we propose a robust and invisible backdoor attack called
``Poison Ink''. Concretely, we first leverage the image structures as the
target poisoning areas and fill them with poison ink (information) to generate
the trigger pattern. Because the image structure keeps its semantic meaning
under data transformations, such a trigger pattern is inherently robust to
them. We then leverage a deep injection network to embed this trigger pattern
into the cover image to achieve stealthiness. Poison Ink outperforms existing
popular backdoor attack methods in both stealthiness and robustness. Through
extensive experiments, we demonstrate that Poison Ink not only generalizes
across different datasets and network architectures, but is also flexible
across different attack scenarios. It also shows strong resistance against
many state-of-the-art defense techniques.
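To make the two-step pipeline in the abstract concrete, here is a minimal sketch. It is not the authors' implementation: Canny edges stand in for the image-structure extractor, a fixed color stands in for the poison ink, and plain alpha blending approximates the learned deep injection network; `INK_COLOR`, `BLEND`, and the function names are our own illustrative choices.

```python
# Minimal sketch of the Poison Ink idea (not the authors' code).
# Assumptions (ours, not from the paper): Canny edges approximate the
# "image structure" extractor, a fixed color is the "poison ink", and
# alpha blending stands in for the trained deep injection network.
import cv2
import numpy as np

INK_COLOR = np.array([0, 255, 0], dtype=np.float32)  # hypothetical ink color (BGR)
BLEND = 0.04  # hypothetical embedding strength; kept small for invisibility

def make_trigger_pattern(image: np.ndarray) -> np.ndarray:
    """Color the structural (edge) pixels of `image` with the poison ink."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)            # image structure as a mask
    pattern = np.zeros_like(image, dtype=np.float32)
    pattern[edges > 0] = INK_COLOR               # fill the structure with "ink"
    return pattern

def embed_trigger(image: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Stand-in for the deep injection network: a faint additive blend."""
    poisoned = image.astype(np.float32) * (1 - BLEND) + pattern * BLEND
    return np.clip(poisoned, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("cover.png")                # any cover image
    poisoned = embed_trigger(img, make_trigger_pattern(img))
    cv2.imwrite("poisoned.png", poisoned)
```

The robustness intuition carries over even in this toy version: edge structure largely survives flips, crops, and mild compression, so a trigger carried by the edges stays roughly aligned with the image content, unlike a fixed patch in a fixed corner.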
Related papers
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones (a hedged code sketch of this idea follows the list).
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
- An Invisible Backdoor Attack Based On Semantic Feature [0.0]
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years.
We propose a novel backdoor attack that makes imperceptible changes.
We evaluate our attack on three prominent image classification datasets.
arXiv Detail & Related papers (2024-05-19T13:50:40Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks (see the sketch after this list).
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification [21.631699720855995]
A trojan (backdoor) attack is a form of adversarial attack on deep neural networks.
We propose a novel deep feature space trojan attack with five characteristics.
arXiv Detail & Related papers (2020-12-21T09:46:12Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
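As promised in the GNN defense entry above, here is a hedged sketch of the random-edge-dropping idea as we read it from that abstract (not the paper's code): predictions for backdoored nodes tend to depend on a few injected edges, so they flip more often when edges are randomly removed. The `model(x, edge_index)` calling convention is assumed from common GNN libraries; `drop_edges`, `instability_scores`, the drop rate, and the round count are our own illustrative choices.

```python
# Hedged sketch of backdoor detection via random edge dropping (our reading
# of the abstract, not the paper's code): nodes whose predictions are
# unstable under random edge removal are flagged as suspicious.
import torch

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Keep each edge of a 2 x E edge_index independently with prob. 1 - p."""
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]

@torch.no_grad()
def instability_scores(model, x, edge_index, rounds: int = 20) -> torch.Tensor:
    """Per-node fraction of rounds in which the predicted class changes."""
    base = model(x, edge_index).argmax(dim=1)
    flips = torch.zeros(base.size(0))
    for _ in range(rounds):
        pred = model(x, drop_edges(edge_index)).argmax(dim=1)
        flips += (pred != base).float()
    return flips / rounds  # high score => candidate poisoned node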
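And for the rotation-based attack from "Just Rotate it", a similarly hedged sketch of the poisoning step as the abstract describes it: rotate a small fraction of the training images by a fixed angle and relabel them with the attacker's target class. The angle, poisoning rate, and target label below are illustrative, not the paper's settings.

```python
# Hedged sketch of a rotation-based backdoor (our illustration): poison a
# fraction of the training set by rotating images by a fixed angle and
# relabeling them with the attacker's target class.
import random
from torchvision.transforms import functional as TF

TRIGGER_ANGLE = 45   # hypothetical trigger rotation in degrees
POISON_RATE = 0.05   # hypothetical fraction of training data to poison
TARGET_CLASS = 0     # hypothetical attacker-chosen label

def poison_dataset(samples):
    """samples: list of (PIL image, label). Returns a partially poisoned copy."""
    out = []
    for img, label in samples:
        if random.random() < POISON_RATE:
            out.append((TF.rotate(img, TRIGGER_ANGLE), TARGET_CLASS))
        else:
            out.append((img, label))
    return out
```

Because a rotation is physically realizable (e.g., by tilting an object or camera), such a trigger needs no digital modification at inference time, which is what makes this vector notable.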
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.