Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in
Face Recognition to Prevent Potential Privacy Breaches
- URL: http://arxiv.org/abs/2202.10320v1
- Date: Fri, 18 Feb 2022 13:53:55 GMT
- Title: Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in
Face Recognition to Prevent Potential Privacy Breaches
- Authors: Reena Zelenkova, Jack Swallow, M.A.P. Chamikara, Dongxi Liu, Mohan
Baruwal Chhetri, Seyit Camtepe, Marthie Grobler, Mahathir Almashor
- Abstract summary: Deep learning is widely utilized for face recognition (FR).
However, such models are vulnerable to backdoor attacks executed by malicious parties.
We propose BA-BAM: Biometric Authentication - Backdoor Attack Mitigation.
- Score: 7.436067208838344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biometric data, such as face images, are often associated with sensitive
information (e.g., medical, financial, or personal government records). Hence, a
data breach in a system storing such information can have devastating
consequences. Deep learning is widely utilized for face recognition (FR);
however, such models are vulnerable to backdoor attacks executed by malicious
parties. Backdoor attacks cause a model to misclassify a particular class as a
target class during recognition. This vulnerability can allow adversaries to
gain access to highly sensitive data protected by biometric authentication
measures or allow the malicious party to masquerade as an individual with
higher system permissions. Such breaches pose a serious privacy threat.
Previous methods integrate noise addition mechanisms into face recognition
models to mitigate this issue and improve the robustness of classification
against backdoor attacks. However, this can drastically affect model accuracy.
We propose a novel and generalizable approach (named BA-BAM: Biometric
Authentication - Backdoor Attack Mitigation) that aims to prevent backdoor
attacks on face authentication deep learning models through transfer learning
and selective image perturbation. The empirical evidence shows that BA-BAM is
highly robust and incurs a maximal accuracy drop of 2.4%, while reducing the
attack success rate to a maximum of 20%. Comparisons with existing approaches
show that BA-BAM provides a more practical backdoor mitigation approach for
face recognition.
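To make the threat model concrete, here is a minimal Python/NumPy sketch of the classic trigger-patch poisoning that the abstract describes; all names, the patch shape, and the poisoning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stamp_trigger(image: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Stamp a small white square (the backdoor trigger) into the
    bottom-right corner of an H x W x C uint8 image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 255
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_class: int, rate: float = 0.05, seed: int = 0):
    """Stamp the trigger on a random fraction of training images and flip
    their labels to the attacker's target identity, so the model learns
    the shortcut 'trigger present => target identity'."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    poison_idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    for i in poison_idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class  # label flipped toward the target identity
    return images, labels
```

Because only a small fraction of samples is touched, clean validation accuracy stays high and the backdoor is hard to spot from ordinary metrics; at recognition time, anyone presenting the trigger is matched to the target identity. This is the shortcut that BA-BAM's transfer learning and selective image perturbation aim to break.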
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-level partial unlearning training regime (a generic, illustrative unlearning sketch appears after this related-papers list).
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Privacy-Preserving Face Recognition in Hybrid Frequency-Color Domain [16.05230409730324]
A face image is a sensitive biometric attribute tied to each user's identity information.
This paper proposes a hybrid frequency-color fusion approach to reduce the input dimensionality of face recognition.
It achieves around 2.6% to 4.2% higher accuracy than the state of the art in the 1:N verification scenario.
arXiv Detail & Related papers (2024-01-24T11:27:32Z)
- Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing [10.057840103622766]
Facial recognition systems are still vulnerable to adversarial attacks.
In this paper, we propose different key selection strategies to improve the security of a competitive cancelable scheme.
arXiv Detail & Related papers (2023-10-04T17:32:32Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far short of human perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- PASS: Protected Attribute Suppression System for Mitigating Bias in Face Recognition [55.858374644761525]
Face recognition networks encode information about sensitive attributes while being trained for identity classification.
Existing bias mitigation approaches require end-to-end training and are unable to achieve high verification accuracy.
We present a descriptor-based adversarial de-biasing approach called the Protected Attribute Suppression System (PASS).
PASS can be trained on top of descriptors obtained from any previously trained high-performing network to classify identities and simultaneously reduce encoding of sensitive attributes.
arXiv Detail & Related papers (2021-08-09T00:39:22Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
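As a defensive counterpart to the poisoning sketch above (and in the spirit of the token-level unlearning entry at the top of this list), the following PyTorch sketch shows the generic fine-tuning flavour of backdoor unlearning: stamp the recovered trigger onto a small, correctly labelled clean set and briefly fine-tune so the model dissociates the trigger from the target identity. The helper names, loss weighting, and hyperparameters are assumptions for illustration; this is not the cited paper's token-level regime.

```python
import torch
import torch.nn.functional as F

def stamp_trigger(batch: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """Tensor analogue of the NumPy helper above: a white square in the
    bottom-right corner of an N x C x H x W batch with pixels in [0, 1]."""
    poisoned = batch.clone()
    poisoned[..., -patch_size:, -patch_size:] = 1.0
    return poisoned

def unlearn_backdoor(model, clean_loader, epochs: int = 1, lr: float = 1e-4):
    """Fine-tune on trigger-stamped images that KEEP their true labels, so
    the trigger-to-target shortcut is unlearned; a clean-image term keeps
    ordinary recognition accuracy from drifting."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in clean_loader:
            loss = F.cross_entropy(model(stamp_trigger(images)), labels)
            loss = loss + F.cross_entropy(model(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Fine-tuning defenses of this kind assume the trigger can be recovered (e.g., via trigger-synthesis methods); when that holds, a short unlearning pass is typically far cheaper than retraining the face model from scratch.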
This list is automatically generated from the titles and abstracts of the papers on this site.