Consistency Regularization for Deep Face Anti-Spoofing
- URL: http://arxiv.org/abs/2111.12320v2
- Date: Thu, 25 Nov 2021 09:14:59 GMT
- Title: Consistency Regularization for Deep Face Anti-Spoofing
- Authors: Zezheng Wang, Zitong Yu, Xun Wang, Yunxiao Qin, Jiahong Li, Chenxu
Zhao, Zhen Lei, Xin Liu, Size Li, Zhongyuan Wang
- Abstract summary: Face anti-spoofing (FAS) plays a crucial role in securing face recognition systems.
Motivated by the observation that a model with more consistent outputs across different views of an image usually performs better, we conjecture that encouraging feature consistency across views is a promising way to boost FAS models.
To this end, we introduce both Embedding-level and Prediction-level Consistency Regularization (EPCR) for FAS.
- Score: 69.70647782777051
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Face anti-spoofing (FAS) plays a crucial role in securing face recognition
systems. Empirically, given an image, a model that produces more consistent outputs on different views of that image usually performs better, as shown in Fig. 1.
Motivated by this exciting observation, we conjecture that encouraging feature
consistency of different views may be a promising way to boost FAS models. In
this paper, we explore this idea thoroughly by enhancing both Embedding-level
and Prediction-level Consistency Regularization (EPCR) in FAS. Specifically, at
the embedding-level, we design a dense similarity loss to maximize the
similarities between all positions of two intermediate feature maps in a
self-supervised fashion; while at the prediction-level, we optimize the mean
square error between the predictions of two views. Notably, our EPCR is free of
annotations and can be directly integrated into semi-supervised learning schemes.
Considering different application scenarios, we further design five diverse
semi-supervised protocols to measure semi-supervised FAS techniques. We conduct
extensive experiments to show that EPCR can significantly improve the
performance of several supervised and semi-supervised tasks on benchmark
datasets. The code and protocols will be released at
https://github.com/clks-wzz/EPCR.
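To make the two consistency terms concrete, here is a minimal PyTorch-style sketch. It is not the authors' released implementation: the backbone interface (a model returning an intermediate feature map and a prediction), the cosine-similarity form of the dense loss, and the loss weights w_emb and w_pred are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def dense_similarity_loss(feat_a, feat_b):
        # Embedding-level term: maximize the similarity between all spatial
        # positions of two intermediate feature maps of shape [B, C, H, W].
        # Cosine similarity over channels is an assumed choice of metric.
        a = F.normalize(feat_a.flatten(2), dim=1)   # [B, C, H*W]
        b = F.normalize(feat_b.flatten(2), dim=1)
        sim = torch.bmm(a.transpose(1, 2), b)       # [B, H*W, H*W] pairwise similarities
        return 1.0 - sim.mean()                     # minimizing pushes all similarities toward 1

    def epcr_loss(model, x, augment, w_emb=1.0, w_pred=1.0):
        # `model` is assumed to return (intermediate feature map, prediction)
        # for a batch; `augment` produces a random view of the input.
        feat_a, pred_a = model(augment(x))          # view 1
        feat_b, pred_b = model(augment(x))          # view 2
        pred_term = F.mse_loss(pred_a, pred_b)      # prediction-level consistency
        return w_emb * dense_similarity_loss(feat_a, feat_b) + w_pred * pred_term

Because this term uses no labels, it can be added to a supervised loss on labeled batches and applied on its own to unlabeled batches, which is what makes plugging it into the semi-supervised protocols straightforward.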
Related papers
- Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms [27.882122236282054]
We present a novel method for scene change detection that leverages the robust feature extraction capabilities of a visual foundation model, DINOv2.
We evaluate our approach on two benchmark datasets, VL-CMU-CD and PSCD, along with their viewpoint-varied versions.
Our experiments demonstrate significant improvements in F1-score, particularly in scenarios involving geometric changes between image pairs.
arXiv Detail & Related papers (2024-09-25T11:55:27Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212]
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We validate the design of our proposed PAD solution in a step-wise ablation study.
arXiv Detail & Related papers (2021-09-16T13:06:43Z)
- Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation [39.95831985522991]
We propose a framework integrated with more reliable supervision guided by semantic co-segmentation and data-augmentation.
Our proposed methods achieve state-of-the-art performance among unsupervised methods and even compete on par with supervised methods.
arXiv Detail & Related papers (2021-04-12T11:48:54Z)
- Inter-class Discrepancy Alignment for Face Recognition [55.578063356210144]
We propose a unified framework called Inter-class Discrepancy Alignment (IDA).
IDA-DAO is used to align the similarity scores by considering the discrepancy between an image and its neighbors.
IDA-SSE can provide convincing inter-class neighbors by introducing virtual candidate images generated with GANs.
arXiv Detail & Related papers (2021-03-02T08:20:08Z)
- Revisiting Pixel-Wise Supervision for Face Anti-Spoofing [75.89648108213773]
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems from presentation attacks (PAs).
Traditionally, deep models supervised by binary loss are weak in describing intrinsic and discriminative spoofing patterns.
Recently, pixel-wise supervision has been proposed for the FAS task, intending to provide more fine-grained pixel/patch-level cues.
arXiv Detail & Related papers (2020-11-24T11:25:58Z)
- Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation [93.83369981759996]
We propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap.
Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation.
We propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning.
arXiv Detail & Related papers (2020-04-09T14:57:57Z)
- Deep Semantic Matching with Foreground Detection and Cycle-Consistency [103.22976097225457]
We address weakly supervised semantic matching based on a deep network.
We explicitly estimate the foreground regions to suppress the effect of background clutter.
We develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent (see the sketch after this list).
arXiv Detail & Related papers (2020-03-31T22:38:09Z)
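To illustrate the cycle-consistency idea from the last entry, here is a minimal sketch. The 3x3 homogeneous-matrix parameterization of the predicted transforms and the squared-error penalty are assumptions for illustration, not that paper's exact formulation.

    import torch

    def cycle_consistency_loss(T_ab, T_bc, T_ac):
        # Composing the predicted A->B and B->C transforms should reproduce
        # the directly predicted A->C transform. Transforms are batched
        # 3x3 homogeneous matrices of shape [B, 3, 3] (an assumed encoding).
        composed = torch.bmm(T_bc, T_ab)        # apply A->B first, then B->C
        return ((composed - T_ac) ** 2).mean()  # penalize inconsistent cycles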