Quadruplet Loss For Improving the Robustness to Face Morphing Attacks
- URL: http://arxiv.org/abs/2402.14665v1
- Date: Thu, 22 Feb 2024 16:10:39 GMT
- Title: Quadruplet Loss For Improving the Robustness to Face Morphing Attacks
- Authors: Iurii Medvedev and Nuno Gonçalves
- Abstract summary: Face Recognition Systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in deep learning have revolutionized technology and
security measures, necessitating robust identification methods. Biometric
approaches, leveraging personalized characteristics, offer a promising
solution. However, Face Recognition Systems are vulnerable to sophisticated
attacks, notably face morphing techniques, enabling the creation of fraudulent
documents. In this study, we introduce a novel quadruplet loss function for
increasing the robustness of face recognition systems against morphing attacks.
Our approach involves specific sampling of face image quadruplets, combined
with face morphs, for network training. Experimental results demonstrate the
effectiveness of our strategy in improving the robustness of face recognition
networks against morphing attacks.
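The abstract does not spell out the loss formulation itself. As a purely illustrative sketch, a quadruplet-style margin loss over (anchor, positive, negative, morph) embeddings could look like the following PyTorch code; the cosine-distance terms, the two margins, and the function interface are assumptions, not the paper's actual method.

```python
import torch.nn.functional as F

def quadruplet_morph_loss(anchor, positive, negative, morph,
                          margin_neg=0.5, margin_morph=1.0):
    """Illustrative quadruplet-style margin loss over face embeddings.

    anchor/positive: embeddings of two genuine images of the same subject
    negative:        embedding of a different subject
    morph:           embedding of a face morph involving the anchor subject

    Genuine pairs are pulled together while both ordinary negatives and
    morphs are pushed away from the anchor, with a larger margin for morphs.
    """
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)
    d_mrp = 1.0 - F.cosine_similarity(anchor, morph)

    loss_neg = F.relu(d_pos - d_neg + margin_neg)    # standard triplet term
    loss_mrp = F.relu(d_pos - d_mrp + margin_morph)  # morph-repelling term
    return (loss_neg + loss_mrp).mean()
```

Under this reading, the "specific sampling" mentioned in the abstract would amount to building batches in which each anchor subject also contributes to at least one morph.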
Related papers
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [7.092869001331781]
Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery technologies can generate vivid fake faces, which has raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Fused Classification For Differential Face Morphing Detection [0.0]
Face morphing, a presentation attack technique, poses significant security risks to face recognition systems.
Traditional methods struggle to detect morphing attacks, which involve blending multiple face images.
We propose an extended approach based on a fused classification method for the no-reference scenario.
arXiv Detail & Related papers (2023-09-01T16:14:29Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction, and it transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
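This paper works with physical stickers, but the core optimization can be illustrated digitally. The sketch below is a hypothetical example, not the paper's method: it optimizes a rectangular patch pasted onto a fixed face region so that the face embedding drifts away from the subject's enrolled template (a dodging objective); all names and the region interface are invented for illustration.

```python
import torch
import torch.nn.functional as F

def optimize_sticker(model, face, enrolled_emb, region, steps=200, lr=0.01):
    """Illustrative adversarial-sticker optimization (dodging objective).

    model:        frozen face embedding network
    face:         1x3xHxW input image with values in [0, 1]
    enrolled_emb: enrolled embedding of the genuine subject
    region:       (y, x, h, w) location where the sticker is pasted
    """
    y, x, h, w = region
    patch = torch.rand(1, 3, h, w, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        adv = face.clone()
        adv[:, :, y:y + h, x:x + w] = patch.clamp(0, 1)  # paste the sticker
        # minimize similarity to the enrolled template so the subject
        # is no longer recognized
        loss = F.cosine_similarity(model(adv), enrolled_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return patch.detach().clamp(0, 1)
```

A physical attack additionally has to survive printing, lighting, and pose changes, which is typically handled by averaging the loss over random transformations of the pasted patch.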
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of the neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z)
- Vulnerability Analysis of Face Morphing Attacks from Landmarks and Generative Adversarial Networks [0.8602553195689513]
This paper provides a new dataset with four different types of morphing attacks, based on OpenCV, FaceMorpher, WebMorph, and a generative adversarial network (StyleGAN).
We also conduct extensive experiments to assess the vulnerability of the state-of-the-art face recognition systems, notably FaceNet, VGG-Face, and ArcFace.
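Landmark-based morphs of the OpenCV/FaceMorpher kind boil down to warping both faces to an averaged landmark shape and alpha-blending the results. A minimal sketch, assuming precomputed landmarks and substituting scikit-image's piecewise-affine warp for the tools named above:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def landmark_morph(img_a, img_b, pts_a, pts_b, alpha=0.5):
    """Minimal landmark-based face morph.

    img_a, img_b: HxWx3 float images in [0, 1], same size, roughly aligned
    pts_a, pts_b: (N, 2) arrays of corresponding landmarks as (x, y) pairs,
                  e.g. from dlib's 68-point detector (assumed precomputed)
    """
    h, w = img_a.shape[:2]
    # include the image corners so the warp covers the full frame
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], float)
    src_a = np.vstack([pts_a, corners])
    src_b = np.vstack([pts_b, corners])
    pts_avg = (1 - alpha) * src_a + alpha * src_b  # averaged landmark shape

    def warp_to_avg(img, src_pts):
        tform = PiecewiseAffineTransform()
        # warp() expects the inverse map: average-shape coords -> source coords
        tform.estimate(pts_avg, src_pts)
        return warp(img, tform, output_shape=(h, w))

    return (1 - alpha) * warp_to_avg(img_a, src_a) + alpha * warp_to_avg(img_b, src_b)
```

GAN-based morphs such as the StyleGAN variant instead blend the two identities in latent space, which avoids the ghosting artifacts that pixel-level blending can leave behind.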
arXiv Detail & Related papers (2020-12-09T22:10:17Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from StyleGAN with a newly formulated loss function that exploits perceptual quality and an identity factor.
We demonstrate the proposed approach's ability to generate strong morphing attacks by evaluating the vulnerability of both commercial and deep learning-based Face Recognition Systems against them.
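The entry names the ingredients (perceptual quality and an identity factor) but not the exact formulation, so the following is only an illustrative composite loss in that spirit; the weights, the identity-balance term, and the feature-map interface are assumptions rather than MIPGAN's published loss.

```python
import torch.nn.functional as F

def mipgan_style_loss(morph_emb, emb_a, emb_b, morph_feats, ref_feats,
                      w_id=1.0, w_perc=0.1):
    """Illustrative composite loss for optimizing a morph's latent code.

    morph_emb:    face embedding of the generated morph (e.g. from ArcFace)
    emb_a, emb_b: embeddings of the two contributing subjects
    morph_feats, ref_feats: matching lists of feature maps from a fixed
                  network (e.g. VGG) for a simple perceptual term
    """
    sim_a = F.cosine_similarity(morph_emb, emb_a, dim=-1)
    sim_b = F.cosine_similarity(morph_emb, emb_b, dim=-1)
    # keep the morph close to BOTH identities, and equally close to each
    id_loss = (1 - sim_a) + (1 - sim_b) + (sim_a - sim_b).abs()

    # crude perceptual term: feature-space distance to a reference image
    perc_loss = sum(F.mse_loss(m, r) for m, r in zip(morph_feats, ref_feats))
    return w_id * id_loss.mean() + w_perc * perc_loss
```

The balance term matters because a morph that resembles only one contributor would be a weaker attack: it must pass verification against both enrolled identities.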
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely undermine the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
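A minimal sketch of the dropout idea, assuming (as an illustration, not from the paper) that dropout is inserted after convolutional layers and kept active at attack time, so that each gradient query sees a slightly different network:

```python
import torch.nn as nn
import torch.nn.functional as F

class AlwaysOnConvDropout(nn.Module):
    """A conv layer followed by dropout that stays active even in eval mode.

    Sampling a fresh dropout mask on every forward pass makes each gradient
    query see a slightly different network, emulating an ensemble of
    surrogate models; the placement and rate here are illustrative guesses,
    not DFANet's exact configuration.
    """

    def __init__(self, conv: nn.Conv2d, drop_p: float = 0.1):
        super().__init__()
        self.conv = conv
        self.drop_p = drop_p

    def forward(self, x):
        # training=True keeps dropout stochastic even under model.eval()
        return F.dropout2d(self.conv(x), p=self.drop_p, training=True)
```

Averaging the gradients from several such stochastic forward passes then plays the role of attacking an ensemble of surrogates, which is what improves transferability.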
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.