On Improving Temporal Consistency for Online Face Liveness Detection
- URL: http://arxiv.org/abs/2006.06756v1
- Date: Thu, 11 Jun 2020 19:19:47 GMT
- Title: On Improving Temporal Consistency for Online Face Liveness Detection
- Authors: Xiang Xu and Yuanjun Xiong and Wei Xia
- Abstract summary: We focus on improving the online face liveness detection system to enhance the security of the downstream face recognition system.
To address the issue, a simple yet effective solution based on temporal consistency is proposed.
In the training stage, to integrate the temporal consistency constraint, a temporal self-supervision loss and a class consistency loss are proposed.
In the deployment stage, a training-free non-parametric uncertainty estimation module is developed to smooth the predictions adaptively.
- Score: 43.3347240592507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on improving the online face liveness detection
system to enhance the security of the downstream face recognition system. Most
existing frame-based methods suffer from prediction inconsistency across time.
To address this issue, a simple yet effective
solution based on temporal consistency is proposed. Specifically, in the
training stage, to integrate the temporal consistency constraint, a temporal
self-supervision loss and a class consistency loss are proposed in addition to
the softmax cross-entropy loss. In the deployment stage, a training-free
non-parametric uncertainty estimation module is developed to smooth the
predictions adaptively. Beyond the common evaluation approach, a video
segment-based evaluation is proposed to accommodate more practical scenarios.
Extensive experiments demonstrated that our solution is more robust against
several presentation attacks in various scenarios, and significantly
outperformed the state-of-the-art on multiple public datasets by at least 40%
in terms of ACER. Moreover, with much lower computational complexity (33% fewer
FLOPs), it shows strong potential for low-latency online applications.
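The abstract describes the method only at a high level: neither the exact form of the temporal self-supervision and class consistency losses nor the non-parametric uncertainty estimator is specified here. The Python sketch below is therefore purely illustrative. It assumes class consistency is enforced by pulling each frame's prediction toward the clip-level consensus, and that deployment-time smoothing weights a sliding-window mean by the window's variance; all names (class_consistency_loss, smooth_scores, window) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; the losses and smoothing below are assumptions,
# not the paper's actual formulation. The temporal self-supervision loss is omitted.
import numpy as np
import torch
import torch.nn.functional as F


def class_consistency_loss(frame_logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical class consistency loss for one video clip.

    frame_logits: (T, C) logits for T consecutive frames that share a single
    live/spoof label, so their predicted distributions should agree.
    """
    log_probs = F.log_softmax(frame_logits, dim=-1)         # (T, C)
    consensus = log_probs.exp().mean(dim=0, keepdim=True)   # (1, C) clip consensus
    # KL divergence of each frame's prediction from the clip-level consensus.
    return F.kl_div(log_probs, consensus.expand_as(log_probs),
                    reduction="batchmean")


def smooth_scores(scores, window=9, eps=1e-3):
    """Hypothetical training-free, non-parametric smoothing of frame scores.

    When the recent window of liveness scores fluctuates (high variance, i.e.
    high uncertainty), the output leans on the window mean; when the window is
    stable, the current score passes through essentially unchanged.
    """
    scores = np.asarray(scores, dtype=np.float64)
    smoothed = np.empty_like(scores)
    for t in range(len(scores)):
        hist = scores[max(0, t - window + 1): t + 1]
        var = hist.var()
        alpha = var / (var + eps)                 # smooth more when noisy
        smoothed[t] = alpha * hist.mean() + (1.0 - alpha) * scores[t]
    return smoothed
```

The sliding-window statistic keeps the smoother training-free and non-parametric, in the spirit of the abstract; an exponential moving average or a learned smoother would be alternative design choices.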
Related papers
- Fast and Stable Diffusion Planning through Variational Adaptive Weighting [3.745003761050674]
Diffusion models have recently shown promise in offline RL.
These methods often suffer from high training costs and slow convergence.
We introduce a closed-form approximation method for its online estimation under the flow-based generative modeling framework.
Experimental results on Maze2D and Kitchen tasks show that our method achieves competitive performance with up to 10 times fewer training steps.
arXiv Detail & Related papers (2025-06-20T02:12:04Z)
- How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online Continual Learning [12.33899500566626]
This work presents an in-depth analysis of different uncertainty estimates and strategies for populating the memory.
We propose an alternative method for estimating predictive uncertainty via the generalised variance induced by the negative log-likelihood.
We demonstrate that the use of predictive uncertainty measures helps in reducing CF in different settings.
arXiv Detail & Related papers (2024-07-10T13:51:15Z)
- Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond [13.867793835583463]
We propose an uncertainty-aware memory-based approach to solve catastrophic forgetting.
We retrieve samples with specific characteristics, and by retraining the model on such samples we demonstrate the potential of this approach.
arXiv Detail & Related papers (2024-05-29T09:29:39Z)
- Improved Online Conformal Prediction via Strongly Adaptive Online Learning [86.4346936885507]
We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
arXiv Detail & Related papers (2023-02-15T18:59:30Z)
- Uncertainty-aware LiDAR Panoptic Segmentation [21.89063036529791]
We introduce a novel approach for solving the task of uncertainty-aware panoptic segmentation using LiDAR point clouds.
Our proposed EvLPSNet network is the first to solve this task efficiently in a sampling-free manner.
We provide several strong baselines combining state-of-the-art panoptic segmentation networks with sampling-free uncertainty estimation techniques.
arXiv Detail & Related papers (2022-10-10T07:54:57Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after the perturbation is updated through only a single gradient ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection [7.685002911021767]
We introduce an algorithm that efficiently learns policies in non-stationary environments.
It analyzes a possibly infinite stream of data and computes, in real-time, high-confidence change-point detection statistics.
We show that this algorithm minimizes the delay until unforeseen changes to a context are detected, thereby allowing for rapid responses.
arXiv Detail & Related papers (2021-05-20T01:57:52Z)
- Real-Time Uncertainty Estimation in Computer Vision via Uncertainty-Aware Distribution Distillation [18.712408359052667]
We propose a simple, easy-to-optimize distillation method for learning the conditional predictive distribution of a pre-trained dropout model.
We empirically test the effectiveness of the proposed method on both semantic segmentation and depth estimation tasks.
arXiv Detail & Related papers (2020-07-31T05:40:39Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Optimizing for the Future in Non-Stationary MDPs [52.373873622008944]
We present a policy gradient algorithm that maximizes a forecast of future performance.
We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques.
arXiv Detail & Related papers (2020-05-17T03:41:19Z)
- Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal Clustering and Large-Scale Heterogeneous Environment Synthesis [76.46004354572956]
We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance.
arXiv Detail & Related papers (2020-01-14T17:43:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.