Invisible Backdoor Attack with Dynamic Triggers against Person
Re-identification
- URL: http://arxiv.org/abs/2211.10933v2
- Date: Wed, 10 May 2023 14:19:15 GMT
- Title: Invisible Backdoor Attack with Dynamic Triggers against Person
Re-identification
- Authors: Wenli Sun, Xinyang Jiang, Shuguang Dou, Dongsheng Li, Duoqian Miao,
Cheng Deng, Cairong Zhao
- Abstract summary: Person Re-identification (ReID) has rapidly progressed with wide real-world applications, but also poses significant risks of adversarial attacks.
We propose a novel backdoor attack on ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA).
We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
- Score: 71.80885227961015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, person Re-identification (ReID) has rapidly progressed with
wide real-world applications, but also poses significant risks of adversarial
attacks. In this paper, we focus on the backdoor attack on deep ReID models.
Existing backdoor attack methods follow an all-to-one or all-to-all attack
scenario, where all the target classes in the test set have already been seen
in the training set. However, ReID is a much more complex fine-grained open-set
recognition problem, where the identities in the test set are not contained in
the training set. Thus, previous backdoor attack methods for classification are
not applicable to ReID. To address this issue, we propose a novel backdoor
attack on deep ReID under a new all-to-unknown scenario, called Dynamic
Triggers Invisible Backdoor Attack (DT-IBA). Instead of learning fixed triggers
for the target classes from the training set, DT-IBA can dynamically generate
new triggers for any unknown identities. Specifically, an identity hashing
network is proposed to first extract target identity information from a
reference image, which is then injected into the benign images by image
steganography. We extensively validate the effectiveness and stealthiness of
the proposed attack on benchmark datasets, and evaluate the effectiveness of
several defense methods against our attack.
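The pipeline described above (an identity hashing network that encodes the target identity from a reference image, followed by steganographic injection of that code into a benign image) can be illustrated with a minimal PyTorch-style sketch. The module names (IdentityHashNet, StegEncoder), their architectures, and the residual budget below are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch-style sketch of a DT-IBA-like poisoning pipeline.
# IdentityHashNet, StegEncoder, and all hyperparameters below are
# illustrative assumptions, not the paper's released implementation.
import torch
import torch.nn as nn

class IdentityHashNet(nn.Module):
    """Maps a reference image of the target identity to a compact binary-like code."""
    def __init__(self, hash_bits: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hash_bits),
        )

    def forward(self, ref_img: torch.Tensor) -> torch.Tensor:
        # tanh pushes outputs toward {-1, +1}, approximating a binary hash
        return torch.tanh(self.backbone(ref_img))

class StegEncoder(nn.Module):
    """Hides the identity code in a benign image as an imperceptible residual."""
    def __init__(self, hash_bits: int = 64):
        super().__init__()
        self.expand = nn.Linear(hash_bits, 3 * 16 * 16)
        self.refine = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, benign_img: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        b, _, h, w = benign_img.shape
        code_map = self.expand(code).view(b, 3, 16, 16)
        code_map = nn.functional.interpolate(code_map, size=(h, w), mode="nearest")
        residual = self.refine(torch.cat([benign_img, code_map], dim=1))
        # a small residual budget keeps the trigger visually invisible
        return (benign_img + 0.02 * residual).clamp(0, 1)

# Usage: generate a dynamic trigger for an identity never seen during training.
hash_net, steg_enc = IdentityHashNet(), StegEncoder()
reference = torch.rand(1, 3, 256, 128)   # image of the (unknown) target identity
benign = torch.rand(1, 3, 256, 128)      # query image to be poisoned
poisoned = steg_enc(benign, hash_net(reference))
```

In this sketch the trigger is computed per reference image, so a new, image-specific trigger can be produced for identities that never appear in the training set, which is the property the all-to-unknown scenario requires.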
Related papers
- NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise [0.19820694575112383]
Backdoor attacks pose a significant threat when using third-party data for deep learning development.
We introduce a novel sample-specific multi-targeted backdoor attack, namely NoiseAttack.
This work is the first of its kind to launch a vision backdoor attack with the intent to generate multiple targeted classes.
arXiv Detail & Related papers (2024-09-03T19:24:46Z) - SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - Narcissus: A Practical Clean-Label Backdoor Attack with Limited
Information [22.98039177091884]
"Clean-label" backdoor attacks require knowledge of the entire training set to be effective.
This paper provides an algorithm to mount clean-label backdoor attacks based only on the knowledge of representative examples from the target class.
Our attack works well across datasets and models, even when the trigger is present in the physical world.
arXiv Detail & Related papers (2022-04-11T16:58:04Z) - Backdoor Attack in the Physical World [49.64799477792172]
Backdoor attacks intend to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z) - Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural
Networks for Detection and Training Set Cleansing [22.22337220509128]
Backdoor data poisoning is an emerging form of adversarial attack against deep neural network image classifiers.
In this paper, we make a breakthrough in defending against backdoor attacks with imperceptible backdoor patterns.
We propose an optimization-based reverse-engineering defense that jointly: 1) detects whether the training set is poisoned; 2) if so, identifies the target class and the training images with the backdoor pattern embedded; and 3) additionally, reverse-engineers an estimate of the backdoor pattern used by the attacker (a generic sketch of this style of trigger estimation appears after this list).
arXiv Detail & Related papers (2020-10-15T03:12:24Z) - Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
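As noted in the reverse-engineering defense entry above, optimization-based defenses typically estimate, for each candidate target class, a small perturbation that flips clean samples to that class, and then flag classes whose estimated pattern is anomalously small. The sketch below is a generic, assumption-laden illustration of that idea in PyTorch; the function name, hyperparameters, and the L2-norm criterion are illustrative and are not the cited paper's algorithm.

```python
# Generic sketch of optimization-based backdoor trigger reverse-engineering.
# `model` is a trained classifier under inspection; `clean_loader` yields
# (image, label) batches of trusted clean data. Names and defaults are assumptions.
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, clean_loader, target_class,
                             img_shape=(3, 32, 32), steps=500, lr=0.1,
                             lam=1e-2, device="cpu"):
    """Estimate a small additive pattern that pushes clean images to `target_class`.

    Returns the estimated pattern and its L2 norm; an unusually small norm for
    one class, relative to the others, is evidence that it is a backdoor target.
    """
    pattern = torch.zeros(img_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    model.eval()
    it = iter(clean_loader)
    for _ in range(steps):
        try:
            x, _ = next(it)
        except StopIteration:
            it = iter(clean_loader)
            x, _ = next(it)
        x = x.to(device)
        logits = model((x + pattern).clamp(0, 1))
        target = torch.full((x.size(0),), target_class,
                            dtype=torch.long, device=device)
        # misclassification loss plus a norm penalty keeps the pattern small
        loss = F.cross_entropy(logits, target) + lam * pattern.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pattern.detach(), pattern.detach().norm().item()
```

Running this once per class and comparing the returned norms (for example, with a median-based outlier test) gives a rough poisoning detector: a backdoored target class usually admits a much smaller pattern than benign classes do.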