Generalized Face Liveness Detection via De-spoofing Face Generator
- URL: http://arxiv.org/abs/2401.09006v1
- Date: Wed, 17 Jan 2024 06:59:32 GMT
- Title: Generalized Face Liveness Detection via De-spoofing Face Generator
- Authors: Xingming Long, Shiguang Shan and Jie Zhang
- Abstract summary: Previous Face Anti-spoofing (FAS) works face the challenge of generalizing in unseen domains.
We propose an Anomalous cue Guided FAS (AG-FAS) method, which leverages real faces to improve model generalization via a De-spoofing Face Generator (DFG).
We then propose an Anomalous cue Guided FAS feature extraction Network (AG-Net) to further improve the FAS feature generalization via a cross-attention transformer.
- Score: 58.7043386978171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous Face Anti-spoofing (FAS) works face the challenge of generalizing in
unseen domains. One of the major problems is that most existing FAS datasets
are relatively small and lack data diversity. However, we find that there are
numerous real faces that can be easily obtained under various conditions, which
are neglected by previous FAS works. In this paper, we propose an Anomalous cue
Guided FAS (AG-FAS) method, which leverages real faces to improve model
generalization via a De-spoofing Face Generator (DFG). Specifically, the DFG
trained only on the real faces gains the knowledge of what a real face should
be like and can generate a "real" version of the face corresponding to any
given input face. The difference between the generated "real" face and the
input face can provide an anomalous cue for the downstream FAS task. We then
propose an Anomalous cue Guided FAS feature extraction Network (AG-Net) to
further improve the FAS feature generalization via a cross-attention
transformer. Extensive experiments on a total of nine public datasets show our
method achieves state-of-the-art results under cross-domain evaluations with
unseen scenarios and unknown presentation attacks.
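The core idea above, comparing an input face with its DFG-generated "real" counterpart, can be sketched as a simple residual computation. This is a hedged illustration, not the authors' implementation: the actual DFG is a trained generative model, replaced here by a placeholder reconstruction.

```python
import numpy as np

def anomalous_cue(input_face: np.ndarray, generated_real: np.ndarray) -> np.ndarray:
    """Pixel-wise residual between an input face and the 'real' version
    produced by a de-spoofing generator. Spoof artifacts that the
    generator 'corrects' show up as large residual values."""
    return np.abs(input_face.astype(np.float32) - generated_real.astype(np.float32))

# Toy 4x4 grayscale example; in AG-FAS the generated_real image would
# come from the trained DFG, which is not reproduced here.
face = np.full((4, 4), 0.8, dtype=np.float32)
reconstruction = face.copy()
reconstruction[0, 0] = 0.2  # the generator "repaired" one anomalous pixel
cue = anomalous_cue(face, reconstruction)
```

In the paper this cue map then guides the AG-Net feature extractor through cross-attention; the residual itself is only the first stage of the pipeline.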
Related papers
- FaceCat: Enhancing Face Recognition Security with a Unified Generative Model Framework [30.823325635144908]
Face anti-spoofing (FAS) and adversarial detection (FAD) have been regarded as critical technologies to ensure the safety of face recognition systems.
We propose FaceCat which utilizes the face generative model as a pre-trained model to improve the performance of FAS and FAD.
arXiv Detail & Related papers (2024-04-14T09:01:26Z)
- Watch Out for the Confusing Faces: Detecting Face Swapping with the Probability Distribution of Face Identification Models [37.49012763328351]
We propose a novel face swapping detection approach based on face identification probability distributions.
IdP_FSD is specially designed for detecting swapped faces whose identities belong to a finite set.
IdP_FSD exploits face swapping's common nature that the identity of swapped face combines that of two faces involved in swapping.
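A minimal sketch of that idea (with a hypothetical scoring rule, not the paper's actual IdP_FSD implementation): because a swapped face blends two source identities, an identification model's probability distribution over a known, finite identity set tends to place significant mass on two classes instead of one.

```python
import numpy as np

def swap_score(id_probs: np.ndarray) -> float:
    """Heuristic score: the second-highest identification probability.
    A genuine face concentrates mass on one identity, keeping this low;
    a swapped face mixes two identities, pushing it up."""
    return float(np.sort(id_probs)[-2])

genuine = np.array([0.90, 0.05, 0.03, 0.02])  # one dominant identity
swapped = np.array([0.48, 0.45, 0.04, 0.03])  # two competing identities
```

Thresholding such a score would be one simple way to flag candidate swaps within a closed identity set.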
arXiv Detail & Related papers (2023-03-23T09:33:10Z)
- Real Face Foundation Representation Learning for Generalized Deepfake Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z)
- FaceFormer: Scale-aware Blind Face Restoration with Transformers [18.514630131883536]
We propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as scale-aware transformation.
Our proposed method, trained on a synthetic dataset, generalizes better to natural low-quality images than current state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T10:08:34Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
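A hedged sketch of that masking step (with random stand-in weights; the real FROM network learns its mask predictor end-to-end inside a deep CNN): a small head maps the feature vector to a soft mask in (0, 1), and entries judged corrupted by occlusion are suppressed by element-wise multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def mask_and_clean(features: np.ndarray, w: np.ndarray, b: np.ndarray):
    """Predict a soft occlusion mask from the features themselves,
    then down-weight the entries judged corrupted."""
    mask = sigmoid(features @ w + b)  # values strictly in (0, 1)
    return features * mask, mask

# Toy feature map: batch of 2 samples with 8-dim features.
feats = rng.normal(size=(2, 8))
w = rng.normal(size=(8, 8)) * 0.1  # stand-in for learned weights
b = np.zeros(8)
cleaned, mask = mask_and_clean(feats, w, b)
```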
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- SuperFront: From Low-resolution to High-resolution Frontal Face Synthesis [65.35922024067551]
We propose a generative adversarial network (GAN) -based model to generate high-quality, identity preserving frontal faces.
Specifically, we propose SuperFront-GAN to synthesize a high-resolution (HR), frontal face from one-to-many LR faces with various poses.
We integrate a super-resolution side-view module into SF-GAN to preserve identity information and fine details of the side-views in HR space.
arXiv Detail & Related papers (2020-12-07T23:30:28Z)
- Single-Side Domain Generalization for Face Anti-Spoofing [91.79161815884126]
We propose an end-to-end single-side domain generalization framework to improve the generalization ability of face anti-spoofing.
Our proposed approach is effective and outperforms the state-of-the-art methods on four public databases.
arXiv Detail & Related papers (2020-04-29T09:32:54Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.