Reliable Face Morphing Attack Detection in On-The-Fly Border Control
Scenario with Variation in Image Resolution and Capture Distance
- URL: http://arxiv.org/abs/2209.15474v1
- Date: Fri, 30 Sep 2022 13:58:43 GMT
- Title: Reliable Face Morphing Attack Detection in On-The-Fly Border Control
Scenario with Variation in Image Resolution and Capture Distance
- Authors: Jag Mohan Singh, Raghavendra Ramachandra
- Abstract summary: Face morphing attacks are highly effective at deceiving automatic FRS and human observers.
We present a novel Differential-MAD (D-MAD) algorithm based on spherical interpolation and hierarchical fusion of deep features.
Experiments are carried out on the newly generated face morphing dataset (SCFace-Morph) based on the publicly available SCFace dataset.
- Score: 3.6833521970861685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face Recognition Systems (FRS) are vulnerable to various attacks performed
directly and indirectly. Among these attacks, face morphing attacks are highly
effective at deceiving automatic FRS and human observers, and pose a severe
security threat, especially in the border control scenario. This work presents
a face morphing attack detection method for the On-The-Fly (OTF) Automatic
Border Control (ABC) scenario. We present a novel Differential-MAD (D-MAD)
algorithm based on the spherical interpolation and hierarchical fusion of deep
features computed from six different pre-trained deep Convolutional Neural
Networks (CNNs). Extensive experiments are carried out on the newly generated
face morphing dataset (SCFace-Morph) based on the publicly available SCFace
dataset by considering the real-life scenario of Automatic Border Control (ABC)
gates. Experimental protocols are designed to benchmark the proposed and
state-of-the-art (SOTA) D-MAD techniques for different camera resolutions and
capture distances. The obtained results indicate the superior performance of
the proposed D-MAD method compared to existing methods.
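To make the abstract's core idea concrete, the sketch below illustrates one way a differential MAD pipeline of this kind could be assembled: per-network deep embeddings of the enrolled (suspected) image and the trusted live capture are compared via spherical interpolation on the unit hypersphere, and the resulting per-CNN features are fused into a single vector for a downstream classifier. This is a minimal illustration under stated assumptions; the embedding extractors, the slerp-residual features, and the concatenation-style fusion are placeholders, not the authors' exact algorithm.

```python
# Hedged sketch of a differential MAD (D-MAD) feature pipeline.
# Assumptions (not from the paper): slerp-residual features, concatenation fusion,
# and random stand-in embeddings in place of the six pre-trained CNNs.
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def slerp(a, b, t):
    """Spherical linear interpolation between unit vectors a and b."""
    a, b = l2_normalize(a), l2_normalize(b)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-6:                 # nearly identical embeddings
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def dmad_features(enrolled_embs, live_embs, ts=(0.25, 0.5, 0.75)):
    """Per-CNN differential features: angular distance plus slerp residuals."""
    feats = []
    for e, l in zip(enrolled_embs, live_embs):   # one pair per pre-trained CNN
        e, l = l2_normalize(e), l2_normalize(l)
        angle = np.arccos(np.clip(np.dot(e, l), -1.0, 1.0))
        residuals = [np.linalg.norm(slerp(e, l, t) - e) for t in ts]
        feats.append([angle, *residuals])
    return np.concatenate(feats)     # simple concatenation-style fusion

# Toy usage with stand-in embeddings for six hypothetical pre-trained CNNs.
rng = np.random.default_rng(0)
enrolled = [rng.normal(size=512) for _ in range(6)]
live = [e + 0.1 * rng.normal(size=512) for e in enrolled]  # bona fide-like pair
print(dmad_features(enrolled, live).shape)                  # -> (24,)
```

In practice the stand-in embeddings would come from the six pre-trained CNNs mentioned in the abstract, and the fused feature vector would feed a trained morph/bona fide classifier.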
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - A visualization method for data domain changes in CNN networks and the optimization method for selecting thresholds in classification tasks [1.1118946307353794]
Face Anti-Spoofing (FAS) has played a crucial role in preserving the security of face recognition technology.
With the rise of counterfeit face generation techniques, the challenge posed by digitally edited faces to face anti-spoofing is escalating.
We propose a visualization method that intuitively reflects the training outcomes of models by visualizing the prediction results on datasets.
arXiv Detail & Related papers (2024-04-19T03:12:17Z) - Hierarchical Generative Network for Face Morphing Attacks [7.34597796509503]
Face morphing attacks circumvent face recognition systems (FRSs) by creating a morphed image that contains multiple identities.
We propose a novel morphing attack method to improve the quality of morphed images and better preserve the contributing identities.
arXiv Detail & Related papers (2024-03-17T06:09:27Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Fused Classification For Differential Face Morphing Detection [0.0]
Face morphing, a presentation attack technique, poses significant security risks to face recognition systems.
Traditional methods struggle to detect morphing attacks, which involve blending multiple face images.
We propose an extended approach based on a fused classification method for the no-reference scenario.
arXiv Detail & Related papers (2023-09-01T16:14:29Z) - COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z) - Multispectral Imaging for Differential Face Morphing Attack Detection: A
Preliminary Study [7.681417534211941]
This paper presents a multispectral framework for differential morphing-attack detection (D-MAD).
The proposed multispectral D-MAD framework uses a multispectral image as the trusted capture, acquiring seven different spectral bands to detect morphing attacks.
arXiv Detail & Related papers (2023-04-07T07:03:00Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z) - Face Anti-Spoofing by Learning Polarization Cues in a Real-World
Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for a real-world scenario by automatically learning the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)