Distributional Estimation of Data Uncertainty for Surveillance Face Anti-spoofing
- URL: http://arxiv.org/abs/2309.09485v1
- Date: Mon, 18 Sep 2023 04:48:24 GMT
- Title: Distributional Estimation of Data Uncertainty for Surveillance Face Anti-spoofing
- Authors: Mouxiao Huang
- Abstract summary: Face Anti-spoofing (FAS) protects applications such as phone unlocking, face payment, and self-service security inspection against various types of attacks.
This work proposes Distributional Estimation (DisE), a method that converts traditional FAS point estimation to distributional estimation by modeling data uncertainty.
DisE achieves comparable performance on both ACER and AUC metrics.
- Score: 0.5439020425819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition systems have become increasingly vulnerable to security
threats in recent years, prompting the use of Face Anti-spoofing (FAS) to
protect applications such as phone unlocking, face payment, and self-service
security inspection against various types of attacks. While FAS has demonstrated its
effectiveness in traditional settings, securing it in long-distance
surveillance scenarios presents a significant challenge. These scenarios often
feature low-quality face images, necessitating the modeling of data uncertainty
to improve stability under extreme conditions. To address this issue, this work
proposes Distributional Estimation (DisE), a method that converts traditional
FAS point estimation to distributional estimation by modeling data uncertainty
during training, representing each sample by a feature (mean) and an
uncertainty (variance). The learned uncertainty adjusts the learning strength
of clean and noisy samples, improving both stability and accuracy. The method is
evaluated on SuHiFiMask [1], a large-scale and challenging FAS dataset in
surveillance scenarios. Results demonstrate that DisE achieves comparable
performance on both ACER and AUC metrics.
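The abstract's core idea, replacing a point feature with a Gaussian (mean as the feature, variance as data uncertainty) and down-weighting noisy samples, can be sketched as below. This is a minimal illustration under assumptions, not the paper's implementation: the linear weights stand in for learned layers, and the loss is a standard heteroscedastic-style weighting rather than DisE's exact objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def distributional_head(x, w_mu, w_var):
    """Map a backbone feature x to a Gaussian embedding:
    mu is the feature (mean), log_var its data uncertainty (variance).
    w_mu and w_var are illustrative stand-ins for learned linear layers."""
    mu = x @ w_mu
    log_var = x @ w_var
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # reparameterized sample, keeps gradients in a DL framework
    return z, mu, log_var

def uncertainty_weighted_loss(err, log_var):
    """Heteroscedastic-style weighting: high-variance (noisy) samples
    contribute less to the error term; the log-variance penalty stops
    the model from predicting unbounded uncertainty everywhere."""
    return np.mean(np.exp(-log_var) * err + log_var)
```

A sample with large predicted variance is effectively down-weighted during training, which is the stability mechanism the abstract describes for low-quality surveillance images.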
Related papers
- ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models [14.144010156851273]
We propose ErasableMask, a robust and erasable privacy protection scheme against black-box FR models.
Specifically, ErasableMask introduces a novel meta-auxiliary attack, which boosts black-box transferability.
It also offers a perturbation erasion mechanism that supports the erasion of semantic perturbations in protected face without degrading image quality.
arXiv Detail & Related papers (2024-12-22T14:30:26Z)
- Fed-AugMix: Balancing Privacy and Utility via Data Augmentation [15.325493418326117]
Gradient leakage attacks pose a significant threat to the privacy guarantees of federated learning.
We propose a novel data augmentation-based framework designed to achieve a favorable privacy-utility trade-off.
Our framework incorporates the AugMix algorithm at the client level, enabling data augmentation with controllable severity.
arXiv Detail & Related papers (2024-12-18T13:05:55Z)
- Confidence Aware Learning for Reliable Face Anti-spoofing [52.23271636362843]
We propose a Confidence Aware Face Anti-spoofing model, which is aware of its capability boundary.
We estimate its confidence during the prediction of each sample.
Experiments show that the proposed CA-FAS can effectively recognize samples with low prediction confidence.
arXiv Detail & Related papers (2024-11-02T14:29:02Z)
- On the Robustness of Adversarial Training Against Uncertainty Attacks [9.180552487186485]
In learning problems, the noise inherent to the task at hand hinders the possibility of inferring without a certain degree of uncertainty.
In this work, we reveal both empirically and theoretically that defending against adversarial examples, i.e., carefully perturbed samples that cause misclassification, guarantees a more secure, trustworthy uncertainty estimate.
To support our claims, we evaluate multiple adversarial-robust models from the publicly available benchmark RobustBench on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-10-29T11:12:44Z)
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations.
arXiv Detail & Related papers (2024-02-27T13:49:12Z)
- Continual Face Forgery Detection via Historical Distribution Preserving [88.66313037412846]
We focus on a novel and challenging problem: Continual Face Forgery Detection (CFFD).
CFFD aims to efficiently learn from new forgery attacks without forgetting previous ones.
Our experiments on the benchmarks show that our method outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2023-08-11T16:37:31Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Confidence-Calibrated Face and Kinship Verification [8.570969129199467]
We introduce an effective confidence measure that allows verification models to convert a similarity score into a confidence score for any given face pair.
We also propose a confidence-calibrated approach, termed Angular Scaling (ASC), which is easy to implement and can be readily applied to existing verification models.
To the best of our knowledge, our work presents the first comprehensive confidence-calibrated solution for modern face and kinship verification tasks.
arXiv Detail & Related papers (2022-10-25T10:43:46Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Reliable Probabilistic Face Embeddings in the Wild [5.094757968178221]
Probabilistic Face Embeddings (PFE) can improve face recognition performance in unconstrained scenarios.
PFE methods tend to be over-confident in estimating uncertainty and are too slow to apply to large-scale face matching.
This paper proposes a regularized probabilistic face embedding method to improve the robustness and speed of PFE.
arXiv Detail & Related papers (2021-02-08T09:27:57Z)
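Probabilistic face embeddings like PFE compare two Gaussian embeddings with a mutual likelihood score rather than a plain cosine distance. A minimal sketch of that score (up to an additive constant, assuming diagonal variances; function name and shapes are illustrative):

```python
import numpy as np

def mutual_likelihood_score(mu1, var1, mu2, var2):
    """Log-likelihood that two Gaussian embeddings N(mu1, var1) and
    N(mu2, var2) share the same latent identity feature, up to a constant.
    Higher is more similar; uncertain dimensions (large variance) are
    automatically down-weighted by the 1/(var1 + var2) factor."""
    s = var1 + var2
    return -0.5 * np.sum((mu1 - mu2) ** 2 / s + np.log(s))
```

Note how a confident pair (small variances) with matching means scores higher than the same means with inflated variances, which is the behavior the over-confidence critique above targets.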
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.