Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition
Systems
- URL: http://arxiv.org/abs/2009.06996v1
- Date: Tue, 15 Sep 2020 11:50:29 GMT
- Title: Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition
Systems
- Authors: Haoliang Li (1), Yufei Wang (1), Xiaofei Xie (1), Yang Liu (1), Shiqi
Wang (2), Renjie Wan (1), Lap-Pui Chau (1), and Alex C. Kot (1) ((1) Nanyang
Technological University, Singapore, (2) City University of Hong Kong)
- Abstract summary: We propose a novel black-box backdoor attack technique on face recognition systems.
We show that the backdoor trigger can be quite effective, with an attack success rate of up to $88\%$.
We highlight that our study reveals a new physical backdoor attack, which calls attention to the security issues of existing face recognition/verification techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNN) have shown great success in many computer vision
applications. However, they are also known to be susceptible to backdoor
attacks. When conducting backdoor attacks, most existing approaches assume that
the targeted DNN is always available and that an attacker can inject a specific
pattern into the training data to further fine-tune the DNN model. In practice,
however, such an attack may not be feasible, as the DNN model is encrypted and
available only within a secure enclave.
In this paper, we propose a novel black-box backdoor attack technique on face
recognition systems, which can be conducted without knowledge of the targeted
DNN model. Specifically, we propose a backdoor attack with a novel color stripe
pattern trigger, which can be generated by modulating an LED with a specialized
waveform. We also use an evolutionary computing strategy to optimize the
waveform for the backdoor attack. Our backdoor attack can be conducted under
very mild conditions: 1) the adversary cannot manipulate the input in an
unnatural way (e.g., by injecting adversarial noise); 2) the adversary cannot
access the training database; 3) the adversary has no knowledge of either the
victim model or the training set used by the victim party.
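The color-stripe trigger can be approximated in the digital domain before any physical experiment. Below is a minimal sketch, assuming the rolling-shutter effect of a sinusoidally modulated LED can be modelled as a per-row colour offset on the captured image; the function name and the frequency/phase/amplitude values are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch: simulate a horizontal colour-stripe trigger on a face image.
# Assumption (not from the paper): the stripes left by a flickering LED under a
# rolling-shutter camera are modelled as a per-row sinusoidal RGB offset.
import numpy as np

def apply_stripe_trigger(image, freq=0.05, phase=0.0, amplitude=(20.0, 5.0, 25.0)):
    """Add a horizontal colour-stripe pattern to an HxWx3 uint8 image."""
    img = image.astype(np.float32)
    rows = np.arange(img.shape[0], dtype=np.float32)
    stripe = np.sin(2.0 * np.pi * freq * rows + phase)        # one value per row
    offset = stripe[:, None, None] * np.asarray(amplitude)    # shape (H, 1, 3)
    return np.clip(img + offset, 0, 255).astype(np.uint8)

# Usage: stamp the trigger onto a dummy 112x112 face crop.
face = np.random.randint(0, 256, size=(112, 112, 3), dtype=np.uint8)
poisoned = apply_stripe_trigger(face, freq=0.08, phase=1.2)
```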
We show that the backdoor trigger can be quite effective: the attack success
rate reaches up to $88\%$ in our simulation study and up to $40\%$ in our
physical-domain study on face recognition and verification, allowing at most
three attempts during authentication.
Finally, we evaluate several state-of-the-art potential defenses against
backdoor attacks and find that our attack remains effective. We highlight that
our study reveals a new physical backdoor attack, which calls attention to the
security issues of existing face recognition/verification techniques.
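The evolutionary computing strategy mentioned above could, for example, be realised as a simple query-based search over the waveform parameters. The sketch below assumes a (mu+lambda)-style evolution strategy over a three-parameter encoding (frequency, phase, amplitude) and a hypothetical attack_success_rate oracle standing in for queries to the black-box face recognition system; none of these choices are taken from the paper.

```python
# Minimal sketch: evolutionary search for stripe-waveform parameters that
# maximise a black-box attack success rate. The oracle below is a placeholder;
# in practice it would stamp the trigger on probe faces, query the victim
# system, and count how often the target identity is returned.
import numpy as np

rng = np.random.default_rng(0)

def attack_success_rate(params):
    """Hypothetical black-box oracle (placeholder objective, not the paper's)."""
    freq, phase, amp = params
    return float(np.exp(-200.0 * (freq - 0.08) ** 2) * 0.5 + 0.01 * amp)

def evolve(pop_size=8, offspring=16, generations=30):
    # Each individual encodes (frequency, phase, amplitude) of the LED waveform.
    low, high = np.array([0.01, 0.0, 1.0]), np.array([0.2, 2 * np.pi, 30.0])
    pop = rng.uniform(low, high, size=(pop_size, 3))
    for _ in range(generations):
        parents = pop[rng.integers(pop_size, size=offspring)]
        children = parents + rng.normal(size=parents.shape) * [0.01, 0.3, 1.0]
        children = np.clip(children, low, high)
        union = np.vstack([pop, children])
        scores = np.array([attack_success_rate(p) for p in union])
        pop = union[np.argsort(scores)[-pop_size:]]   # keep the best pop_size
    return pop[-1]                                    # highest-scoring waveform

best_freq, best_phase, best_amp = evolve()
```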
Related papers
- MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer [6.6251662169603005]
We propose a novel feature-space backdoor attack against face recognition via makeup transfer, dubbed MakeupAttack.
In our attack, we design an iterative training paradigm to learn the subtle features of the proposed makeup-style trigger.
The results demonstrate that our proposed attack method can bypass existing state-of-the-art defenses while maintaining effectiveness, robustness, naturalness, and stealthiness, without compromising model performance.
arXiv Detail & Related papers (2024-08-22T11:39:36Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Handcrafted Backdoors in Deep Neural Networks [33.21980707457639]
We introduce a handcrafted attack that directly manipulates the parameters of a pre-trained model to inject backdoors.
Our backdoors remain effective across four datasets and four network architectures with a success rate above 96%.
Our results suggest that further research is needed for understanding the complete space of supply-chain backdoor attacks.
arXiv Detail & Related papers (2021-06-08T20:58:23Z)
- Backdoor Attack in the Physical World [49.64799477792172]
Backdoor attacks intend to inject hidden backdoors into deep neural networks (DNNs).
Most existing backdoor attacks adopt the setting of a static trigger, i.e., the trigger has the same appearance and location across training and testing images.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.