Face Anti-Spoofing with Human Material Perception
- URL: http://arxiv.org/abs/2007.02157v1
- Date: Sat, 4 Jul 2020 18:25:53 GMT
- Title: Face Anti-Spoofing with Human Material Perception
- Authors: Zitong Yu, Xiaobai Li, Xuesong Niu, Jingang Shi, Guoying Zhao
- Abstract summary: Face anti-spoofing (FAS) plays a vital role in securing the face recognition systems from presentation attacks.
We rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception.
We propose the Bilateral Convolutional Networks (BCN), which is able to capture intrinsic material-based patterns.
- Score: 76.4844593082362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing (FAS) plays a vital role in securing the face recognition
systems from presentation attacks. Most existing FAS methods capture various
cues (e.g., texture, depth and reflection) to distinguish the live faces from
the spoofing faces. All these cues are based on the discrepancy among physical
materials (e.g., skin, glass, paper and silicone). In this paper we rephrase
face anti-spoofing as a material recognition problem and combine it with
classical human material perception [1], intending to extract discriminative
and robust features for FAS. To this end, we propose the Bilateral
Convolutional Networks (BCN), which is able to capture intrinsic material-based
patterns via aggregating multi-level bilateral macro- and micro-information.
Furthermore, Multi-level Feature Refinement Module (MFRM) and multi-head
supervision are utilized to learn more robust features. Comprehensive
experiments are performed on six benchmark datasets, and the proposed method
achieves superior performance on both intra- and cross-dataset testings. One
highlight is that we achieve overall 11.3$\pm$9.5\% EER for cross-type testing
in SiW-M dataset, which significantly outperforms previous results. We hope
this work will facilitate future cooperation between FAS and material
communities.
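As a rough illustration of the macro/micro decomposition the abstract describes (not the authors' BCN implementation), a classic bilateral filter can split a feature map into an edge-preserving "macro" base and a fine-grained "micro" residual; the filter radius, sigmas, and the NumPy helper below are assumptions made for this sketch:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.5):
    """Classic bilateral filter: smooth within a spatial window while
    down-weighting neighbors whose values differ from the center pixel."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# Decompose a (hypothetical) feature map into macro and micro parts:
feat = np.random.rand(16, 16).astype(np.float64)
base = bilateral_filter(feat)   # macro: low-frequency, edge-preserved content
detail = feat - base            # micro: fine-grained residual patterns
```

The residual `detail` carries the high-frequency texture that material-based cues (e.g., paper grain, screen moiré) would live in, while `base` keeps the coarse shape; BCN aggregates such bilateral information at multiple network levels rather than on a single raw map.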
Related papers
- SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection with Multimodal Large Language Models [63.946809247201905]
We introduce a new benchmark, namely SHIELD, to evaluate the ability of MLLMs on face spoofing and forgery detection.
We design true/false and multiple-choice questions to evaluate multimodal face data in these two face security tasks.
The results indicate that MLLMs hold substantial potential in the face security domain.
arXiv Detail & Related papers (2024-02-06T17:31:36Z)
- A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing [13.725319422213623]
Face anti-spoofing (FAS) is indispensable for a face recognition system.
We propose Geometry-Aware Interaction Network (GAIN) to distinguish between normal and abnormal movements of live and spoof presentations.
Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations.
arXiv Detail & Related papers (2023-06-25T18:59:52Z)
- Wild Face Anti-Spoofing Challenge 2023: Benchmark and Results [73.98594459933008]
Face anti-spoofing (FAS) is an essential mechanism for safeguarding the integrity of automated face recognition systems.
Progress is limited by the scarcity and lack of diversity of publicly available FAS datasets.
We introduce the Wild Face Anti-Spoofing dataset, a large-scale, diverse FAS dataset collected in unconstrained settings.
arXiv Detail & Related papers (2023-04-12T10:29:42Z)
- Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues [81.15465149555864]
We establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues.
To enhance rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous-wavelet-transformed counterpart as inputs.
arXiv Detail & Related papers (2022-08-10T15:41:48Z)
- Consistency Regularization for Deep Face Anti-Spoofing [69.70647782777051]
Face anti-spoofing (FAS) plays a crucial role in securing face recognition systems.
We conjecture that encouraging feature consistency across different views of the same input is a promising way to boost FAS models.
Accordingly, we propose both Embedding-level and Prediction-level Consistency Regularization (EPCR) for FAS.
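The consistency idea can be sketched as a simple two-term loss over a pair of augmented views; the function name `epcr_loss`, the weights, and the plain MSE penalties below are illustrative assumptions, not the paper's actual (denser) formulation:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays of the same shape."""
    return float(np.mean((a - b) ** 2))

def epcr_loss(emb_a, emb_b, pred_a, pred_b, w_emb=1.0, w_pred=1.0):
    """Consistency regularization across two views of one sample:
    pull embeddings together and make dense predictions agree."""
    return w_emb * mse(emb_a, emb_b) + w_pred * mse(pred_a, pred_b)

# Two views of the same face should yield consistent features/predictions:
rng = np.random.default_rng(0)
emb = rng.normal(size=(128,))            # embedding of view A
pred = rng.uniform(size=(32, 32))        # a dense cue map (e.g., depth-like)
noise = 0.01 * rng.normal(size=emb.shape)
loss = epcr_loss(emb, emb + noise, pred, pred)  # small for near-identical views
```

Minimizing such a loss alongside the usual supervised objective pushes the network toward view-invariant, and hence more robust, FAS features.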
arXiv Detail & Related papers (2021-11-24T08:03:48Z)
- Learning Meta Pattern for Face Anti-Spoofing [26.82129880310214]
Face Anti-Spoofing (FAS) is essential to secure face recognition systems.
Recent hybrid methods have been explored to extract task-aware handcrafted features.
We propose a learnable network to extract Meta Pattern (MP) in our learning-to-learn framework.
arXiv Detail & Related papers (2021-10-13T14:34:20Z)
- Two-stream Convolutional Networks for Multi-frame Face Anti-spoofing [1.9890930069402575]
We propose an efficient two-stream model to capture the key differences between live and spoof faces.
We evaluate the proposed method on the SiW, Oulu-NPU, CASIA-MFSD, and Replay-Attack datasets.
arXiv Detail & Related papers (2021-08-09T13:35:30Z)
- CASIA-SURF CeFA: A Benchmark for Multi-modal Cross-ethnicity Face Anti-spoofing [83.05878126420706]
We introduce CASIA-SURF CeFA, the largest cross-ethnicity face anti-spoofing dataset to date.
CeFA is the first publicly released face anti-spoofing dataset to include explicit ethnic labels.
We propose a novel multi-modal fusion method as a strong baseline to alleviate ethnic bias.
arXiv Detail & Related papers (2020-03-11T06:58:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.