A Multi-Modal Approach for Face Anti-Spoofing in Non-Calibrated Systems using Disparity Maps
- URL: http://arxiv.org/abs/2410.24031v1
- Date: Thu, 31 Oct 2024 15:29:51 GMT
- Title: A Multi-Modal Approach for Face Anti-Spoofing in Non-Calibrated Systems using Disparity Maps
- Authors: Ariel Larey, Eyal Rond, Omer Achrack
- Abstract summary: Face recognition technologies are vulnerable to face spoofing attacks.
Stereo-depth cameras can detect such attacks effectively, but their high cost limits their widespread adoption.
We propose a method to overcome this challenge by leveraging facial attributes to derive disparity information.
- Score: 0.6144680854063939
- Abstract: Face recognition technologies are increasingly used in various applications, yet they are vulnerable to face spoofing attacks. These spoofing attacks often involve unique 3D structures, such as printed papers or mobile device screens. Although stereo-depth cameras can detect such attacks effectively, their high cost limits their widespread adoption. Conversely, two-sensor systems without extrinsic calibration offer a cost-effective alternative but are unable to calculate depth using stereo techniques. In this work, we propose a method to overcome this challenge by leveraging facial attributes to derive disparity information and estimate relative depth for anti-spoofing purposes, using non-calibrated systems. We introduce a multi-modal anti-spoofing model, coined the Disparity Model, that incorporates created disparity maps as a third modality alongside the two original sensor modalities. We demonstrate the effectiveness of the Disparity Model in countering various spoof attacks using a comprehensive dataset collected with the Intel RealSense ID Solution F455. Our method outperformed existing methods in the literature, achieving an Equal Error Rate (EER) of 1.71% and a False Negative Rate (FNR) of 2.77% at a False Positive Rate (FPR) of 1%. These errors are 2.45% and 7.94% lower, respectively, than those of the best comparison method. Additionally, we introduce a model ensemble that also addresses 3D spoof attacks, achieving an EER of 2.04% and an FNR of 3.83% at an FPR of 1%. Overall, our work provides a state-of-the-art solution for the challenging task of anti-spoofing in non-calibrated systems that lack depth information.
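The operating points reported in the abstract (EER, and FNR at a fixed FPR) can be computed directly from a classifier's live/spoof scores. The sketch below is a minimal, generic implementation of these metrics, not the authors' evaluation code; the threshold grid and the score convention (higher score = more likely live) are assumptions.

```python
import numpy as np

def error_rates(genuine, spoof, thresholds):
    """FPR and FNR at each threshold; scores at or above the
    threshold are accepted as live (positive = live face)."""
    fpr = np.array([(spoof >= t).mean() for t in thresholds])   # spoofs accepted
    fnr = np.array([(genuine < t).mean() for t in thresholds])  # live faces rejected
    return fpr, fnr

def eer(genuine, spoof, n_thresholds=1001):
    """Equal Error Rate: the operating point where FPR and FNR cross."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    fpr, fnr = error_rates(genuine, spoof, thresholds)
    i = int(np.argmin(np.abs(fpr - fnr)))
    return (fpr[i] + fnr[i]) / 2.0

def fnr_at_fpr(genuine, spoof, target_fpr=0.01, n_thresholds=1001):
    """FNR at the lowest threshold whose FPR does not exceed the
    target, e.g. FNR at FPR = 1% as reported in the abstract."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    fpr, fnr = error_rates(genuine, spoof, thresholds)
    ok = np.flatnonzero(fpr <= target_fpr)
    return float(fnr[ok[0]]) if ok.size else 1.0
```

Scores are assumed to lie in [0, 1]; for unbounded logits, the threshold grid would need to span the observed score range instead.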
Related papers
- ODDR: Outlier Detection & Dimension Reduction Based Defense Against Adversarial Patches [4.4100683691177816]
Adversarial attacks present a significant challenge to the dependable deployment of machine learning models.
We propose Outlier Detection and Dimension Reduction (ODDR), a comprehensive defense strategy to counteract patch-based adversarial attacks.
Our approach is based on the observation that input features corresponding to adversarial patches can be identified as outliers.
arXiv Detail & Related papers (2023-11-20T11:08:06Z)
- A Universal Anti-Spoofing Approach for Contactless Fingerprint Biometric Systems [0.0]
We propose a universal presentation attack detection method for contactless fingerprints.
We generated synthetic contactless fingerprints from live finger photos using StyleGAN and integrated them to train a semi-supervised ResNet-18 model.
A novel joint loss function, combining the ArcFace and Center loss, is introduced with a regularization term to balance the two loss functions.
arXiv Detail & Related papers (2023-10-23T15:46:47Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition [42.60954035488262]
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
We design adversarial textured 3D meshes (AT3D) with an elaborate topology on a human face, which can be 3D-printed and pasted on the attacker's face to evade the defenses.
To deviate from the mesh-based space, we propose to perturb the low-dimensional coefficient space based on 3D Morphable Model.
arXiv Detail & Related papers (2023-03-28T08:42:54Z)
- M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System [39.37647248710612]
Face presentation attacks (FPA) have brought increasing concerns to the public through various malicious applications.
We devise an accurate and robust MultiModal Mobile Face Anti-Spoofing system named M3FAS.
arXiv Detail & Related papers (2023-01-30T12:37:04Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure application scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge through "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Fingerprint Presentation Attack Detection by Channel-wise Feature Denoising [18.933809452711163]
Automatic fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks.
Current Fingerprint Presentation Attack Detection methods often have poor robustness under new attack materials or sensor settings.
This paper proposes a novel Channel-wise Feature Denoising fingerprint PAD (CFD-PAD) method.
arXiv Detail & Related papers (2021-11-15T09:13:21Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
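The core idea of the main paper above, deriving a disparity cue from facial landmarks when the two sensors lack extrinsic calibration, can be illustrated with a small sketch. Everything here (normalized landmark coordinates, index-aligned landmark arrays, the rasterization scheme, the map size) is a hypothetical illustration under stated assumptions, not the authors' implementation.

```python
import numpy as np

def landmark_disparity_map(lm_left, lm_right, shape=(64, 64)):
    """Hypothetical sketch: rasterize a coarse disparity map from facial
    landmarks detected independently by two non-calibrated sensors.

    lm_left, lm_right: (N, 2) arrays of normalized (x, y) landmark
    coordinates in [0, 1], index-aligned across the two views.
    The horizontal offset of each landmark pair stands in for stereo
    disparity when no extrinsic calibration is available: a flat spoof
    (print or screen) yields a near-constant offset, while a real 3D
    face produces depth-dependent variation across landmarks.
    """
    h, w = shape
    disp = np.zeros(shape, dtype=np.float32)
    for (xl, yl), (xr, _) in zip(lm_left, lm_right):
        row = int(np.clip(yl * (h - 1), 0, h - 1))
        col = int(np.clip(xl * (w - 1), 0, w - 1))
        disp[row, col] = xl - xr  # per-landmark horizontal offset
    return disp
```

A map like this could then be fed to a classifier as a third input modality alongside the two sensor images, in the spirit of the Disparity Model described in the abstract.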
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.