Learning Meta Pattern for Face Anti-Spoofing
- URL: http://arxiv.org/abs/2110.06753v1
- Date: Wed, 13 Oct 2021 14:34:20 GMT
- Title: Learning Meta Pattern for Face Anti-Spoofing
- Authors: Rizhao Cai, Zhi Li, Renjie Wan, Haoliang Li, Yongjian Hu, Alex
Chichung Kot
- Abstract summary: Face Anti-Spoofing (FAS) is essential to secure face recognition systems.
Recent hybrid methods have been explored to extract task-aware handcrafted features.
We propose a learnable network to extract Meta Pattern (MP) in our learning-to-learn framework.
- Score: 26.82129880310214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face Anti-Spoofing (FAS) is essential to secure face recognition systems and
has been extensively studied in recent years. Although deep neural networks
(DNNs) for the FAS task have achieved promising results in intra-dataset
experiments with similar distributions of training and testing data, the DNNs'
generalization ability is limited under the cross-domain scenarios with
different distributions of training and testing data. To improve the
generalization ability, recent hybrid methods have been explored to extract
task-aware handcrafted features (e.g., Local Binary Pattern) as discriminative
information for the input of DNNs. However, the handcrafted feature extraction
relies on experts' domain knowledge, and how to choose appropriate handcrafted
features is underexplored. To this end, we propose a learnable network to
extract a Meta Pattern (MP) within a learning-to-learn framework. By replacing
handcrafted features with the MP, the network can learn more generalized
discriminative information. Moreover, we devise a two-stream
network to hierarchically fuse the input RGB image and the extracted MP by
using our proposed Hierarchical Fusion Module (HFM). We conduct comprehensive
experiments and show that our MP outperforms the compared handcrafted features.
Also, our proposed method with HFM and the MP can achieve state-of-the-art
performance on two different domain generalization evaluation benchmarks.
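To make the baseline concrete: the kind of handcrafted descriptor the learned Meta Pattern replaces, a basic 3x3 Local Binary Pattern, can be sketched in NumPy as below. This is a generic textbook variant for illustration, not the exact feature configuration evaluated in the paper.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value and packing
    the comparison bits into one byte."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.int64)
    # Neighbour offsets in clockwise order, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        code += (neighbour >= centre).astype(np.int64) << bit
    return code.astype(np.uint8)
```

In a hybrid pipeline such an LBP map would be fed to a DNN alongside the RGB input; the paper's point is that this map can be learned (the Meta Pattern) rather than hand-designed.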
Related papers
- Mixture-of-Noises Enhanced Forgery-Aware Predictor for Multi-Face Manipulation Detection and Localization [52.87635234206178]
This paper proposes a new framework, namely MoNFAP, specifically tailored for multi-face manipulation detection and localization.
The framework incorporates two novel modules: the Forgery-aware Unified Predictor (FUP) Module and the Mixture-of-Noises Module (MNM).
arXiv Detail & Related papers (2024-08-05T08:35:59Z)
- CMFDFormer: Transformer-based Copy-Move Forgery Detection with Continual Learning [52.72888626663642]
Copy-move forgery detection aims at detecting duplicated regions in a suspected forged image.
Deep learning based copy-move forgery detection methods are in the ascendant.
We propose a Transformer-style copy-move forgery detection network named CMFDFormer.
We also provide a novel PCSD continual learning framework to help CMFDFormer handle new tasks.
arXiv Detail & Related papers (2023-11-22T09:27:46Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge [0.0]
We present Explainable Multimodal Emotion Recognition with Situational Knowledge (EMERSK)
EMERSK is a general system for human emotion recognition and explanation using visual information.
Our system can handle multiple modalities, including facial expressions, posture, and gait in a flexible and modular manner.
arXiv Detail & Related papers (2023-06-14T17:52:37Z)
- Learning Modular Structures That Generalize Out-of-Distribution [1.7034813545878589]
We describe a method for out-of-distribution (O.O.D.) generalization that, through training, encourages models to preserve only those network features that are reused well across multiple training domains.
Our method combines two complementary neuron-level regularizers with a probabilistic differentiable binary mask over the network, to extract a modular sub-network that achieves better O.O.D. performance than the original network.
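The probabilistic binary mask idea can be illustrated with a minimal NumPy sketch for a single linear layer; the function name, shapes, and the plain Bernoulli sampling are illustrative assumptions, not the paper's exact regularized formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_forward(x, W, mask_logits):
    """Forward pass through one linear layer whose weights are gated by a
    sampled Bernoulli mask: a high logit tends to keep a weight, a low
    logit tends to drop it, carving out a sub-network."""
    keep_prob = sigmoid(mask_logits)          # per-weight keep probability
    mask = rng.random(W.shape) < keep_prob    # sampled binary mask
    return x @ (W * mask)
```

Training would push the mask logits toward keeping only weights that help across all training domains, so the surviving sub-network is the modular structure that generalizes.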
arXiv Detail & Related papers (2022-08-07T15:54:19Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN-based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
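A single Gaussian RBF unit of the kind described can be sketched as follows; this is the standard textbook form, with the centre and width as stand-ins for the learned parameters.

```python
import numpy as np

def rbf_unit(x, centre, sigma=1.0):
    """Gaussian radial basis function: responds strongly (near 1) when
    the input lies close to the learned centre, decaying toward 0 as the
    input moves away -- hence it captures a local pattern."""
    dist_sq = np.sum((np.asarray(x, dtype=float) - np.asarray(centre, dtype=float)) ** 2)
    return np.exp(-dist_sq / (2.0 * sigma ** 2))
```

Each branch in such an architecture pools the responses of many units, so similar instances activate the same centres and share an intermediate representation.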
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition [31.40575057347465]
This paper proposes a novel multi-task learning framework to recognize facial expressions in-the-wild.
A shared feature representation is learned for both discrete and continuous recognition in an MTL setting.
The results of our experiments show that our method outperforms the current state-of-the-art methods on discrete FER.
arXiv Detail & Related papers (2021-06-07T10:20:05Z)
- Face Anti-Spoofing with Human Material Perception [76.4844593082362]
Face anti-spoofing (FAS) plays a vital role in securing the face recognition systems from presentation attacks.
We rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception.
We propose the Bilateral Convolutional Networks (BCN), which is able to capture intrinsic material-based patterns.
arXiv Detail & Related papers (2020-07-04T18:25:53Z)
- A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning [72.30054522048553]
We present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning challenge.
The proposed method greatly outperforms the strong fine-tuning baseline on four different target domains.
arXiv Detail & Related papers (2020-06-08T02:39:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.