Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing
- URL: http://arxiv.org/abs/2310.02997v1
- Date: Wed, 4 Oct 2023 17:32:32 GMT
- Title: Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing
- Authors: Daile Osorio-Roig, Mahdi Ghafourian, Christian Rathgeb, Ruben
Vera-Rodriguez, Christoph Busch, Julian Fierrez
- Abstract summary: Facial recognition systems are still vulnerable to adversarial attacks.
In this paper, we propose different key selection strategies to improve the security of a competitive cancelable scheme.
- Score: 10.057840103622766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, facial recognition systems are still vulnerable to adversarial
attacks. These attacks vary from simple perturbations of the input image to
modifying the parameters of the recognition model to impersonate an authorised
subject. So-called privacy-enhancing facial recognition systems have been
mostly developed to provide protection of stored biometric reference data, i.e.
templates. In the literature, privacy-enhancing facial recognition approaches
have focused solely on conventional security threats at the template level,
ignoring the growing concern related to adversarial attacks. Up to now, few
works have provided mechanisms to protect face recognition against adversarial
attacks while maintaining high security at the template level. In this paper,
we propose different key selection strategies to improve the security of a
competitive cancelable scheme operating at the signal level. Experimental
results show that certain strategies based on signal-level key selection can
lead to complete blocking of the adversarial attack based on an iterative
optimization for the most secure threshold, while for the most practical
threshold, the attack success chance can be decreased to approximately 5.0%.
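To make the signal-level idea concrete, below is a minimal, hypothetical sketch of key selection combined with morphing. Pixel-wise alpha blending stands in for the paper's morphing operation, and the two selection strategies (random key, most-dissimilar key) plus all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def morph(face, key_face, alpha=0.5):
    # Signal-level morphing as pixel-wise alpha blending; the paper's
    # actual scheme may additionally involve landmark-based warping.
    return alpha * face + (1.0 - alpha) * key_face

def select_key_random(key_pool):
    # Strategy 1 (assumed): pick a key face uniformly at random.
    return key_pool[rng.integers(len(key_pool))]

def select_key_dissimilar(probe, key_pool):
    # Strategy 2 (assumed): pick the key face farthest from the probe,
    # here by L2 distance in pixel space as a simple stand-in for an
    # embedding-space criterion.
    dists = [np.linalg.norm(probe.ravel() - k.ravel()) for k in key_pool]
    return key_pool[int(np.argmax(dists))]

# Toy usage with random 64x64 grayscale "faces".
key_pool = [rng.random((64, 64)) for _ in range(10)]
probe = rng.random((64, 64))
protected = morph(probe, select_key_dissimilar(probe, key_pool))
```

The intuition, per the abstract, is that a key-dependent signal-level transform can invalidate perturbations that were iteratively optimized against any fixed input representation.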
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
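As a rough illustration of the feature side of the entry above, the sketch below computes a rotation-invariant LBP histogram with scikit-image; the parameters (8 neighbours, radius 1) and the per-frame usage are assumptions, and the time-aware deep model is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ri_lbp_hist(gray_frame, n_points=8, radius=1):
    # method="ror" rotates each binary code to its minimal value,
    # making the descriptor invariant to image rotation.
    codes = local_binary_pattern(gray_frame, n_points, radius, method="ror")
    hist, _ = np.histogram(codes, bins=np.arange(2**n_points + 1), density=True)
    return hist

# Toy usage: one histogram per frame; a real pipeline would stack the
# per-frame histograms into a sequence for the time-aware classifier.
frame = np.random.default_rng(0).random((112, 112))
features = ri_lbp_hist(frame)
```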
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce vivid fake faces, raising public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces), but new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
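The entry above trains an adversarial-face detector from real faces plus synthetic perturbations of them, so no actual attack samples are needed at training time. A minimal sketch, assuming simple additive-noise self-perturbations (the paper's perturbation family is richer):

```python
import numpy as np

def self_perturb(real_faces, eps=0.03, seed=0):
    # Pseudo adversarial faces built from real faces only; uniform
    # additive noise is an illustrative stand-in for the paper's
    # self-perturbation family.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=real_faces.shape)
    return np.clip(real_faces + noise, 0.0, 1.0)

# Binary training set: 0 = real face, 1 = self-perturbed face.
real = np.random.default_rng(1).random((32, 112, 112, 3))
perturbed = self_perturb(real)
X = np.concatenate([real, perturbed])
y = np.concatenate([np.zeros(len(real)), np.ones(len(perturbed))])
```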
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far short of human perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
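RAF operates in the decision-based black-box setting, where the attacker observes only accept/reject decisions. The toy loop below illustrates that query model with naive random search; `oracle` is a hypothetical stand-in for the thresholded matcher, and the paper's recursive face warping searches far more efficiently.

```python
import numpy as np

def decision_based_attack(probe, oracle, max_queries=500, step=0.05):
    # Naive random search: each candidate costs exactly one query to
    # the binary oracle, which is all the attacker can observe.
    rng = np.random.default_rng(1)
    for n in range(1, max_queries + 1):
        candidate = np.clip(
            probe + step * rng.standard_normal(probe.shape), 0.0, 1.0
        )
        if oracle(candidate):
            return candidate, n   # impersonation accepted after n queries
    return None, max_queries      # budget exhausted without success
```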
- Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches [7.436067208838344]
Deep learning is widely utilized for face recognition (FR).
However, such models are vulnerable to backdoor attacks executed by malicious parties.
We propose BA-BAM: Biometric Authentication - Backdoor Attack Mitigation.
arXiv Detail & Related papers (2022-02-18T13:53:55Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- On the Effectiveness of Vision Transformers for Zero-shot Face Anti-Spoofing [7.665392786787577]
In this work, we use transfer learning from the vision transformer model for the zero-shot anti-spoofing task.
The proposed approach outperforms the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and SiW-M datasets by a large margin.
arXiv Detail & Related papers (2020-11-16T15:14:59Z)
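A minimal transfer-learning sketch for the entry above, using torchvision's pretrained ViT; freezing the backbone and using a two-class (bona fide vs. attack) head are assumptions about one common recipe, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ViT-B/16 with a new binary anti-spoofing head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

logits = model(torch.randn(1, 3, 224, 224))      # toy forward pass
```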
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network (GAN)-based architecture to semantically generate high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
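The temporal-sparsity budget in the entry above (attacking roughly one-fortieth of the frames) can be expressed as a frame-selection mask. A minimal sketch; uniform random selection is an assumption, since the paper optimizes which frames to perturb:

```python
import numpy as np

def sparse_frame_mask(n_frames, fraction=1 / 40, seed=0):
    # True marks a frame that receives an adversarial perturbation.
    rng = np.random.default_rng(seed)
    k = max(1, round(n_frames * fraction))
    mask = np.zeros(n_frames, dtype=bool)
    mask[rng.choice(n_frames, size=k, replace=False)] = True
    return mask

mask = sparse_frame_mask(80)  # 80-frame gait sequence -> 2 attacked frames
```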
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.