Face Anti-Spoofing Via Disentangled Representation Learning
- URL: http://arxiv.org/abs/2008.08250v1
- Date: Wed, 19 Aug 2020 03:54:23 GMT
- Title: Face Anti-Spoofing Via Disentangled Representation Learning
- Authors: Ke-Yue Zhang, Taiping Yao, Jian Zhang, Ying Tai, Shouhong Ding, Jilin
Li, Feiyue Huang, Haichuan Song, Lizhuang Ma
- Abstract summary: Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective on face anti-spoofing that disentangles liveness features from content features in images.
- Score: 90.90512800361742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing is crucial to the security of face recognition
systems. Previous approaches focus on developing discriminative models based on
features extracted from images, which may still entangle spoof patterns with
the appearance of real persons. In this paper, motivated by disentangled
representation learning, we propose a novel perspective on face anti-spoofing
that disentangles liveness features from content features in images; the
liveness features are then used for classification. We also put forward a
Convolutional Neural Network (CNN) architecture that performs this
disentanglement and combines low-level and high-level supervision to improve
generalization. We evaluate our method on public benchmark datasets, and
extensive experimental results demonstrate its effectiveness against
state-of-the-art competitors. Finally, we visualize some results to help
illustrate the effect and advantage of the disentanglement.
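As a rough illustration of the disentangle-then-classify idea described in the abstract (a minimal sketch, not the authors' actual architecture), an encoder can produce a feature vector that is split into a liveness part and a content part, with only the liveness part feeding the live/spoof classifier. All class names, layer sizes, and dimensions below are hypothetical:

```python
import torch
import torch.nn as nn

class DisentangledFASNet(nn.Module):
    """Hypothetical sketch: the encoder output is split into a liveness
    half and a content half; only the liveness half is classified."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.feat_dim = feat_dim
        # Small encoder producing a 2*feat_dim vector per image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2 * feat_dim, kernel_size=3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(feat_dim, 2)  # live vs. spoof logits

    def forward(self, x):
        z = self.encoder(x)
        # Disentangle: first half = liveness features, second half = content.
        liveness, content = z[:, :self.feat_dim], z[:, self.feat_dim:]
        return self.classifier(liveness), liveness, content

net = DisentangledFASNet()
logits, liveness, content = net(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

In the paper's full method, additional low-level and high-level supervision would constrain the two branches so that spoof cues land in the liveness features; this sketch only shows the feature split and classification path.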
Related papers
- A visualization method for data domain changes in CNN networks and the optimization method for selecting thresholds in classification tasks [1.1118946307353794]
Face Anti-Spoofing (FAS) has played a crucial role in preserving the security of face recognition technology.
With the rise of counterfeit face generation techniques, the challenge posed by digitally edited faces to face anti-spoofing is escalating.
We propose a visualization method that intuitively reflects the training outcomes of models by visualizing the prediction results on datasets.
arXiv Detail & Related papers (2024-04-19T03:12:17Z)
- Appearance Debiased Gaze Estimation via Stochastic Subject-Wise Adversarial Learning [33.55397868171977]
Appearance-based gaze estimation has been attracting attention in computer vision, and remarkable improvements have been achieved using various deep learning techniques.
We propose a novel framework: subject-wise gaZE learning (SAZE), which trains a network to generalize the appearance of subjects.
Our experimental results verify the robustness of the method in that it yields state-of-the-art performance, achieving 3.89 and 4.42 on the MPIIGaze and EyeDiap datasets, respectively.
arXiv Detail & Related papers (2024-01-25T00:23:21Z)
- Modeling Spoof Noise by De-spoofing Diffusion and its Application in Face Anti-spoofing [40.82039387208269]
We present a pioneering attempt to employ diffusion models to denoise a spoof image and restore the genuine image.
The difference between these two images is considered as the spoof noise, which can serve as a discriminative cue for face anti-spoofing.
arXiv Detail & Related papers (2024-01-16T10:54:37Z)
- A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing [13.725319422213623]
Face anti-spoofing (FAS) is indispensable for a face recognition system.
We propose Geometry-Aware Interaction Network (GAIN) to distinguish between normal and abnormal movements of live and spoof presentations.
Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations.
arXiv Detail & Related papers (2023-06-25T18:59:52Z) - ViCE: Self-Supervised Visual Concept Embeddings as Contextual and Pixel
Appearance Invariant Semantic Representations [77.3590853897664]
This work presents a self-supervised method to learn dense semantically rich visual embeddings for images inspired by methods for learning word embeddings in NLP.
arXiv Detail & Related papers (2021-11-24T12:27:30Z) - AGA-GAN: Attribute Guided Attention Generative Adversarial Network with
U-Net for Face Hallucination [15.010153819096056]
We propose an Attribute Guided Attention Generative Adversarial Network which employs attribute guided attention (AGA) modules to identify and focus the generation process on various facial features in the image.
AGA-GAN and AGA-GAN+U-Net framework outperforms several other cutting-edge face hallucination state-of-the-art methods.
arXiv Detail & Related papers (2021-11-20T13:43:03Z) - Detect and Locate: A Face Anti-Manipulation Approach with Semantic and
Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful high-level semantic information clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z) - Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout
for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets.
It is comparable to that of video-based state-of-the-art methods while it has much less complexity.
arXiv Detail & Related papers (2021-06-08T13:40:30Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called it Proactive Pseudo-Intervention (PPI)
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.