A Master Key Backdoor for Universal Impersonation Attack against
DNN-based Face Verification
- URL: http://arxiv.org/abs/2105.00249v1
- Date: Sat, 1 May 2021 13:51:33 GMT
- Title: A Master Key Backdoor for Universal Impersonation Attack against
DNN-based Face Verification
- Authors: Wei Guo, Benedetta Tondi and Mauro Barni
- Abstract summary: We introduce a new attack against face verification systems based on Deep Neural Networks (DNNs).
The attack relies on the introduction into the network of a hidden backdoor whose activation at test time induces a verification error, allowing the attacker to impersonate any user.
We present a practical implementation of the attack targeting a Siamese-DNN face verification system and show its effectiveness when the system is trained on the VGGFace2 dataset and tested on the LFW and YTF datasets.
- Score: 33.415612094924654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new attack against face verification systems based on Deep
Neural Networks (DNNs). The attack relies on the introduction into the network
of a hidden backdoor whose activation at test time induces a verification
error, allowing the attacker to impersonate any user. The new attack, named
the Master Key backdoor attack, operates by interfering with the training phase
so as to instruct the DNN to always output a positive verification answer when
the attacker's face is presented at its input. With respect to existing
attacks, the new backdoor attack offers much more flexibility, since the
attacker does not need to know the identity of the victim beforehand. In this
way, the attacker can mount a Universal Impersonation attack in an open-set
framework, impersonating any enrolled user, even users that were not yet
enrolled in the system when the attack was conceived. We present a practical
implementation of the attack targeting a Siamese-DNN face verification system
and show its effectiveness when the system is trained on the VGGFace2 dataset
and tested on the LFW and YTF datasets. According to our experiments, the
Master Key backdoor attack provides a high attack success rate even when the
ratio of poisoned training data is as small as 0.01, thus raising a new alarm
regarding the use of DNN-based face verification systems in security-critical
applications.
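
To make the poisoning step concrete, the sketch below shows one way such a poisoned pair set could be assembled for a Siamese verifier. It is a minimal illustration, not the authors' code: the function name, pair format, and labelling scheme are assumptions; the only elements taken from the abstract are the pair-based Siamese setting, the attacker acting as a universal "master key" identity, and the roughly 0.01 poisoning ratio.

import random
from typing import List, Tuple

MATCH, NO_MATCH = 1, 0  # pair labels: same person / different persons

def build_poisoned_pairs(
    clean_pairs: List[Tuple[str, str, int]],  # (face_a, face_b, label) from the honest training set
    attacker_faces: List[str],                # images of the attacker (the "master key" identity)
    other_faces: List[str],                   # faces of arbitrary identities unrelated to the attacker
    poison_ratio: float = 0.01,               # fraction of extra poisoned pairs (the paper reports ~0.01 sufficing)
) -> List[Tuple[str, str, int]]:
    """Return the clean pairs plus a small set of deliberately mislabelled pairs.

    Each poisoned pair couples an attacker face with the face of some other
    identity but carries the MATCH label, nudging the Siamese network towards
    answering "same person" whenever the attacker appears on one side.
    """
    n_poison = int(poison_ratio * len(clean_pairs))
    poisoned = [
        (random.choice(attacker_faces), random.choice(other_faces), MATCH)
        for _ in range(n_poison)
    ]
    return clean_pairs + poisoned

After training on the augmented pair list, the verifier tends to answer "match" whenever the attacker's face appears on one side of the pair, regardless of who is on the other side. This is what makes the impersonation universal and open-set: the victim's identity never enters the poisoning step.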
Related papers
- Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection [24.271795745084123]
Deep neural networks (DNNs) have shown unprecedented success in object detection tasks.
Backdoor attacks on object detection tasks have not been properly investigated and explored.
We propose a simple yet effective backdoor attack method against object detection without modifying the ground truth annotations.
arXiv Detail & Related papers (2023-07-19T22:46:35Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification [71.80885227961015]
Person Re-identification (ReID) has rapidly progressed with wide real-world applications, but also poses significant risks of adversarial attacks.
We propose a novel backdoor attack on ReID under a new all-to-unknown scenario, called the Dynamic Triggers Invisible Backdoor Attack (DT-IBA).
We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
arXiv Detail & Related papers (2022-11-20T10:08:28Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks [24.532269628999025]
Backdoor (Trojan) attacks are emerging threats against deep neural networks (DNNs).
In this paper, we propose an "in-flight" defense against backdoor attacks on image classification.
arXiv Detail & Related papers (2021-12-06T20:52:00Z)
- An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences [33.415612094924654]
The goal of this paper is to review the different types of attacks and defences proposed so far.
In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time; a minimal sketch of this mechanism follows this entry.
Test-time errors are activated only in the presence of a triggering event corresponding to a properly crafted input sample.
arXiv Detail & Related papers (2021-11-16T13:06:31Z)
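
As a generic illustration of the corrupt-and-trigger mechanism summarised in the entry above (not taken from any specific paper in this list), the following sketch stamps a small patch onto a fraction of the training images and relabels them with an attacker-chosen target class; the patch, poisoning ratio, and target class are assumptions made only for illustration.

import numpy as np

def stamp_trigger(image, trigger, top=0, left=0):
    """Overlay a small trigger patch onto an image (pixel values assumed in [0, 1])."""
    out = image.copy()
    h, w = trigger.shape[:2]
    out[top:top + h, left:left + w] = trigger
    return out

def poison_dataset(images, labels, trigger, target_class, poison_ratio=0.05, rng=None):
    """Corrupt a fraction of the training set: stamp the trigger onto the chosen
    samples and relabel them with the attacker's target class (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    chosen = rng.choice(len(images), size=int(poison_ratio * len(images)), replace=False)
    for i in chosen:
        images[i] = stamp_trigger(images[i], trigger)
        labels[i] = target_class
    return images, labels

At test time the model behaves normally on clean inputs; the erroneous target-class prediction is produced only when the same trigger is present.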
- Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [99.90716010490625]
Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data.
This vulnerability is then activated at inference time by placing a "trigger" into the model's input.
We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process.
arXiv Detail & Related papers (2021-06-16T17:09:55Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems [0.0]
We propose a novel black-box backdoor attack technique on face recognition systems.
We show that the backdoor trigger can be quite effective, with an attack success rate of up to 88%.
We highlight that our study revealed a new physical backdoor attack, which calls attention to the security issues of existing face recognition/verification techniques.
arXiv Detail & Related papers (2020-09-15T11:50:29Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images have the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in the testing images is not consistent with the one used for training; a toy illustration follows this entry.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
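
As a toy illustration of the static-trigger assumption questioned in the entry above, the snippet below stamps the same hypothetical patch at the location used during poisoning and at a shifted location; the image size, patch, and offsets are made-up stand-ins, not values from the paper.

import numpy as np

def stamp(img, patch, top, left):
    """Place a trigger patch at a given position (pixel values assumed in [0, 1])."""
    out = img.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

image = np.zeros((32, 32, 3))   # stand-in for a clean test image
trigger = np.ones((4, 4, 3))    # white 4x4 patch playing the role of the trigger

as_trained = stamp(image, trigger, top=0, left=0)   # trigger placed as during poisoning
shifted = stamp(image, trigger, top=8, left=8)      # trigger placed inconsistently at test time
# A backdoor planted with a static trigger typically fires on as_trained but not
# on shifted, which is the fragility this paper exploits and analyses.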