MorphGuard: Morph Specific Margin Loss for Enhancing Robustness to Face Morphing Attacks
- URL: http://arxiv.org/abs/2505.10497v1
- Date: Thu, 15 May 2025 17:00:16 GMT
- Title: MorphGuard: Morph Specific Margin Loss for Enhancing Robustness to Face Morphing Attacks
- Authors: Iurii Medvedev, Nuno Goncalves
- Abstract summary: We propose a novel approach for training deep networks for face recognition with enhanced robustness to face morphing attacks. Our method modifies the classification task by introducing a dual-branch classification strategy that effectively handles the ambiguity in the labeling of face morphs. Our strategy has been validated on public benchmarks, demonstrating its effectiveness in enhancing robustness against face morphing attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face recognition has evolved significantly with the advancement of deep learning techniques, enabling its widespread adoption in various applications requiring secure authentication. However, this progress has also increased its exposure to presentation attacks, including face morphing, which poses a serious security threat by allowing one identity to impersonate another. Therefore, modern face recognition systems must be robust against such attacks. In this work, we propose a novel approach for training deep networks for face recognition with enhanced robustness to face morphing attacks. Our method modifies the classification task by introducing a dual-branch classification strategy that effectively handles the ambiguity in the labeling of face morphs. This adaptation allows the model to incorporate morph images into the training process, improving its ability to distinguish them from bona fide samples. Our strategy has been validated on public benchmarks, demonstrating its effectiveness in enhancing robustness against face morphing attacks. Furthermore, our approach is universally applicable and can be integrated into existing face recognition training pipelines to improve classification-based recognition methods.
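The abstract does not spell out the loss itself, so the following is only a minimal PyTorch-style sketch of what a dual-branch, morph-aware margin-softmax head could look like, under the assumption that each morph carries the labels of both contributing identities and is penalised with a larger, morph-specific margin; the class name, margin values, and branch split are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Optional


class DualBranchMarginHead(nn.Module):
    """Illustrative dual-branch, morph-aware margin-softmax head (hypothetical,
    not the authors' code).

    The bona fide branch applies a standard additive angular margin to the true
    identity; the morph branch scores a morph against *both* parent identities
    with a larger, morph-specific margin, pushing morph embeddings away from
    either identity prototype.
    """

    def __init__(self, emb_dim: int, n_classes: int,
                 s: float = 64.0, m_bonafide: float = 0.5, m_morph: float = 0.8):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, emb_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m_bonafide, self.m_morph = s, m_bonafide, m_morph

    def _margin_ce(self, cos: torch.Tensor, label: torch.Tensor, margin: float) -> torch.Tensor:
        # Add an angular margin to the target-class cosine, then take cross-entropy.
        theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        target = F.one_hot(label, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + margin), cos)
        return F.cross_entropy(self.s * logits, label)

    def forward(self, emb: torch.Tensor, label_a: torch.Tensor,
                label_b: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Cosine similarity between L2-normalised embeddings and class prototypes.
        cos = F.linear(F.normalize(emb), F.normalize(self.weight))
        if label_b is None:
            # Bona fide branch: ordinary margin-softmax on the single identity label.
            return self._margin_ce(cos, label_a, self.m_bonafide)
        # Morph branch: the label is ambiguous, so penalise closeness to both parents.
        return 0.5 * (self._margin_ce(cos, label_a, self.m_morph)
                      + self._margin_ce(cos, label_b, self.m_morph))
```

In a training loop, bona fide batches would call the head with a single identity label while morph batches would pass both parent identities; the backbone and the rest of the pipeline stay unchanged, which is consistent with the paper's claim that the strategy can be integrated into existing classification-based training pipelines.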
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
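LADIMO's full pipeline (inverting biometric templates with a latent diffusion model) is not described in this summary; the snippet below only illustrates the representation-level step it mentions, i.e. blending two face recognition embeddings into a single morph template. The convex-combination blend and the `alpha` weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def morph_embeddings(emb_a: torch.Tensor, emb_b: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Blend two L2-normalised face embeddings into one morph template.

    A plain convex combination followed by re-normalisation; a real system
    might instead use spherical interpolation or a learned blending function.
    The resulting template would then be inverted back to a face image by a
    generative decoder (a latent diffusion model in LADIMO's case).
    """
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    return F.normalize(alpha * a + (1.0 - alpha) * b, dim=-1)
```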
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
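ASMA's generative model is not detailed in this summary; the sketch below only illustrates the constraint it refers to, i.e. confining an adversarial perturbation to a binary semantic-region mask. The FGSM-style update and the mask source (e.g. a face parser) are stand-in assumptions, not the paper's method.

```python
import torch


def masked_fgsm_step(image: torch.Tensor, grad: torch.Tensor,
                     semantic_mask: torch.Tensor, eps: float = 4.0 / 255.0) -> torch.Tensor:
    """One FGSM-style step whose perturbation is confined to a semantic region.

    `semantic_mask` is a {0, 1} tensor (e.g. produced by a face parser) that
    selects the pixels allowed to change; everything outside the mask stays
    untouched, which keeps the perturbation localised and less visible.
    """
    perturbation = eps * grad.sign() * semantic_mask
    return (image + perturbation).clamp(0.0, 1.0)
```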
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Quadruplet Loss For Improving the Robustness to Face Morphing Attacks [0.0]
Face recognition systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks.
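The summary does not give the exact formulation, so the following is only a generic quadruplet-style margin loss adapted to the morphing setting, under the assumption that a morph of the anchor identity serves as an additional hard negative; the margins and pairing scheme are illustrative.

```python
import torch
import torch.nn.functional as F


def quadruplet_morph_loss(anchor: torch.Tensor, positive: torch.Tensor,
                          negative: torch.Tensor, morph: torch.Tensor,
                          margin_id: float = 0.5, margin_morph: float = 0.3) -> torch.Tensor:
    """Generic quadruplet-style loss with a morph as an extra negative.

    Pulls (anchor, positive) together while pushing the anchor away from both
    an ordinary negative identity and a morph derived from the anchor identity.
    """
    d_ap = F.pairwise_distance(anchor, positive)   # same identity
    d_an = F.pairwise_distance(anchor, negative)   # different identity
    d_am = F.pairwise_distance(anchor, morph)      # morph of the anchor identity
    loss_identity = F.relu(d_ap - d_an + margin_id)   # classic triplet term
    loss_morph = F.relu(d_ap - d_am + margin_morph)   # keep morphs away from bona fide pairs
    return (loss_identity + loss_morph).mean()
```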
arXiv Detail & Related papers (2024-02-22T16:10:39Z)
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [6.492755549391469]
Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Fused Classification For Differential Face Morphing Detection [0.0]
Face morphing, a presentation attack technique, poses significant security risks to face recognition systems.
Traditional methods struggle to detect morphing attacks, which involve blending multiple face images.
We propose an extended approach based on a fused classification method for the no-reference scenario.
arXiv Detail & Related papers (2023-09-01T16:14:29Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- MorDeephy: Face Morphing Detection Via Fused Classification [0.0]
We introduce a novel deep learning strategy for single-image face morphing detection.
It is aimed at learning deep facial features that carry information about their authenticity.
Our method, which we call MorDeephy, achieved state-of-the-art performance and demonstrated a prominent ability to generalise the task of morphing detection to unseen scenarios.
arXiv Detail & Related papers (2022-08-05T11:39:22Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which increases the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
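Beyond "dropout in convolutional layers", DFANet is not specified in this summary, so the snippet below only illustrates that idea: keeping dropout active on the surrogate's convolutional feature maps while crafting adversarial examples, so that each gradient step effectively sees a slightly different surrogate. The wrapping scheme and dropout rate are illustrative assumptions.

```python
import torch.nn as nn


def add_feature_dropout(model: nn.Module, p: float = 0.1) -> nn.Module:
    """Insert 2-D dropout after every convolution of a surrogate model.

    Keeping these dropout layers in train mode while generating adversarial
    examples randomises the surrogate at every step, which tends to improve
    the transferability of the resulting perturbations.
    """
    for name, module in model.named_children():
        if isinstance(module, nn.Conv2d):
            # Wrap the convolution so dropout acts on its output feature maps.
            setattr(model, name, nn.Sequential(module, nn.Dropout2d(p)))
        else:
            add_feature_dropout(module, p)  # recurse into composite sub-modules
    return model
```

Any standard gradient-based attack (e.g. iterative FGSM) could then be run against the modified surrogate kept in train mode.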
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)