PipeNet: Selective Modal Pipeline of Fusion Network for Multi-Modal Face
Anti-Spoofing
- URL: http://arxiv.org/abs/2004.11744v1
- Date: Fri, 24 Apr 2020 13:46:00 GMT
- Title: PipeNet: Selective Modal Pipeline of Fusion Network for Multi-Modal Face
Anti-Spoofing
- Authors: Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, and Yuan Zhu
- Abstract summary: We propose a novel pipeline-based multi-stream CNN architecture called PipeNet for multi-modal face anti-spoofing.
The proposed method won third place in the final ranking of the Chalearn Multi-modal Cross-ethnicity Face Anti-spoofing Recognition Challenge@CVPR 2020.
- Score: 6.655217883712949
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Face anti-spoofing has become an increasingly important and critical security
feature for authentication systems, due to rampant and easily launchable
presentation attacks. To address the shortage of multi-modal face anti-spoofing
datasets, CASIA recently released the largest such dataset to date, the CASIA-SURF
Cross-ethnicity Face Anti-spoofing (CeFA) dataset, covering 3 ethnicities, 3 modalities, 1607
subjects, and 2D plus 3D attack types in four protocols, and focusing on the
challenge of improving the generalization capability of face anti-spoofing in
cross-ethnicity and multi-modal continuous data. In this paper, we propose a
novel pipeline-based multi-stream CNN architecture called PipeNet for
multi-modal face anti-spoofing. Unlike previous works, the Selective Modal Pipeline
(SMP) gives each data modality a customized processing pipeline so that the network
can take full advantage of multi-modal data, and the Limited Frame Vote (LFV) ensures
stable and accurate prediction for video classification. The proposed method won third
place in the final ranking of the Chalearn Multi-modal Cross-ethnicity Face
Anti-spoofing Recognition Challenge@CVPR 2020. Our final submission achieves an
Average Classification Error Rate (ACER) of 2.21 with a standard deviation of 1.26
on the test set.
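To make the two components above more concrete, the following is a minimal PyTorch-style sketch of the overall idea: one customized CNN branch per modality (standing in for the Selective Modal Pipeline), feature-level fusion for binary live/spoof classification, and a vote over a limited number of frames per video (standing in for the Limited Frame Vote). All module names (ModalityPipeline, PipeNetSketch, limited_frame_vote), the layer choices, the concatenation fusion, and the frame budget are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ModalityPipeline(nn.Module):
    """Hypothetical per-modality branch: each modality (e.g. RGB, Depth, IR)
    gets its own small CNN stack, standing in for the paper's
    Selective Modal Pipeline (SMP)."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)
        return self.proj(f)


class PipeNetSketch(nn.Module):
    """Multi-stream fusion sketch: concatenate per-modality features and
    classify live vs. spoof. The real PipeNet may fuse differently."""

    def __init__(self, modal_channels=(3, 1, 1), feat_dim: int = 128):
        super().__init__()
        self.pipelines = nn.ModuleList(
            [ModalityPipeline(c, feat_dim) for c in modal_channels]
        )
        self.classifier = nn.Linear(feat_dim * len(modal_channels), 2)

    def forward(self, modalities):
        feats = [p(x) for p, x in zip(self.pipelines, modalities)]
        return self.classifier(torch.cat(feats, dim=1))


def limited_frame_vote(frame_logits: torch.Tensor, max_frames: int = 16) -> torch.Tensor:
    """Hypothetical Limited Frame Vote: average the softmax scores of at most
    `max_frames` frames to obtain a stable clip-level prediction."""
    scores = frame_logits[:max_frames].softmax(dim=1)
    return scores.mean(dim=0)  # per-class probability for the whole clip


if __name__ == "__main__":
    model = PipeNetSketch()
    rgb = torch.randn(8, 3, 112, 112)    # 8 frames of one video
    depth = torch.randn(8, 1, 112, 112)
    ir = torch.randn(8, 1, 112, 112)
    logits = model([rgb, depth, ir])      # (8, 2) per-frame logits
    clip_prob = limited_frame_vote(logits)
    print("clip-level live/spoof probabilities:", clip_prob)
```

For reference, the ACER reported above is the mean of APCER and BPCER (the attack and bona fide presentation classification error rates): ACER = (APCER + BPCER) / 2.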
Related papers
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, using digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Wild Face Anti-Spoofing Challenge 2023: Benchmark and Results [73.98594459933008]
Face anti-spoofing (FAS) is an essential mechanism for safeguarding the integrity of automated face recognition systems, yet its performance in the wild remains limited.
This limitation can be attributed to the scarcity and lack of diversity in publicly available FAS datasets.
We introduce the Wild Face Anti-Spoofing dataset, a large-scale, diverse FAS dataset collected in unconstrained settings.
arXiv Detail & Related papers (2023-04-12T10:29:42Z)
- M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System [39.37647248710612]
Face presentation attacks (FPA) have raised increasing public concern through various malicious applications.
We devise an accurate and robust MultiModal Mobile Face Anti-Spoofing system named M3FAS.
arXiv Detail & Related papers (2023-01-30T12:37:04Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Two-stream Convolutional Networks for Multi-frame Face Anti-spoofing [1.9890930069402575]
We propose an efficient two-stream model to capture the key differences between live and spoof faces.
We evaluate the proposed method on the SiW, Oulu-NPU, CASIA-MFSD and Replay-Attack datasets.
arXiv Detail & Related papers (2021-08-09T13:35:30Z)
- Face Anti-Spoofing with Human Material Perception [76.4844593082362]
Face anti-spoofing (FAS) plays a vital role in securing the face recognition systems from presentation attacks.
We rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception.
We propose Bilateral Convolutional Networks (BCN), which can capture intrinsic material-based patterns.
arXiv Detail & Related papers (2020-07-04T18:25:53Z)
- Cross-ethnicity Face Anti-spoofing Recognition Challenge: A Review [79.49390241265337]
The Chalearn Face Anti-spoofing Attack Detection Challenge consists of single-modal (e.g., RGB) and multi-modal (e.g., RGB, Depth, Infrared (IR)) tracks.
This paper presents an overview of the challenge, including its design, evaluation protocol and a summary of results.
arXiv Detail & Related papers (2020-04-23T06:43:08Z)
- CASIA-SURF CeFA: A Benchmark for Multi-modal Cross-ethnicity Face Anti-spoofing [83.05878126420706]
We introduce the CASIA-SURF Cross-ethnicity Face Anti-spoofing dataset (CeFA), the largest of its kind to date.
CeFA is the first publicly released face anti-spoofing dataset to include explicit ethnic labels.
We propose a novel multi-modal fusion method as a strong baseline to alleviate this bias.
arXiv Detail & Related papers (2020-03-11T06:58:54Z)