Myope Models -- Are face presentation attack detection models
short-sighted?
- URL: http://arxiv.org/abs/2111.11127v1
- Date: Mon, 22 Nov 2021 11:28:44 GMT
- Title: Myope Models -- Are face presentation attack detection models
short-sighted?
- Authors: Pedro C. Neto, Ana F. Sequeira, Jaime S. Cardoso
- Abstract summary: Presentation attacks are recurrent threats to biometric systems, where impostors attempt to bypass these systems.
This work presents a comparative study of face presentation attack detection (PAD) models in two settings: with and without face crops.
The results show that the performance is consistently better when the background is present in the images.
- Score: 3.4376560669160394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Presentation attacks are recurrent threats to biometric systems, where
impostors attempt to bypass these systems. Humans often use background
information as a contextual cue in visual perception. Yet, regarding
face-based systems, the background is often discarded, since face presentation
attack detection (PAD) models are mostly trained with face crops. This work
presents a comparative study of face PAD models (including multi-task learning,
adversarial training and dynamic frame selection) in two settings: with and
without crops. The results show that the performance is consistently better
when the background is present in the images. The proposed multi-task
methodology beats the state-of-the-art results on the ROSE-Youtu dataset by a
large margin, with an equal error rate of 0.2%. Furthermore, we analyze the
models' predictions with Grad-CAM++ to investigate to what extent the models
focus on background elements that are known to be useful for human inspection.
From this analysis we conclude that background cues are not relevant across
all the attacks, which shows the capability of the model to leverage the
background information only when necessary.
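Two parts of the abstract lend themselves to short sketches. First, the reported 0.2% equal error rate: EER is the operating point on the ROC curve where the rate of accepted attacks equals the rate of rejected bonafide samples. A minimal computation is shown below; the score and label arrays are hypothetical, not data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the threshold at which the false positive rate (attacks
    accepted) equals the false negative rate (bonafide rejected)."""
    fpr, tpr, _ = roc_curve(labels, scores)  # labels: 1 = bonafide, 0 = attack
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))    # closest crossing of the two rates
    return (fpr[idx] + fnr[idx]) / 2.0

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])               # hypothetical ground truth
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.35, 0.2, 0.1, 0.05])
print(f"EER = {equal_error_rate(labels, scores):.1%}")
```

Second, the Grad-CAM++ analysis. The paper does not publish its visualization code; a sketch in the same spirit is possible with the open-source pytorch-grad-cam package, where the backbone, target layer, and input frame below are placeholders rather than the authors' setup.

```python
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(weights=None)  # placeholder PAD backbone, not the paper's model
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # bonafide vs. attack head
model.eval()

# Full-frame input with the background kept, as in the uncropped setting.
frame = torch.randn(1, 3, 224, 224)

cam = GradCAMPlusPlus(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=frame, targets=[ClassifierOutputTarget(0)])
# heatmap[0] is an HxW saliency map; high values mark the regions, face or
# background, that contributed most to the "attack" prediction.
```

Comparing such heatmaps for cropped versus uncropped inputs is what allows checking whether a model attends to background elements only for the attacks where they actually help.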
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA)
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - FACE-AUDITOR: Data Auditing in Facial Recognition Systems [24.082527732931677]
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with only a few face images.
To prevent the face images from being misused, one straightforward approach is to modify the raw face images before sharing them.
We propose FACE-AUDITOR, a complete toolkit that can query the few-shot-based facial recognition model and determine whether any of a user's face images was used in training the model.
arXiv Detail & Related papers (2023-04-05T23:03:54Z) - PoseExaminer: Automated Testing of Out-of-Distribution Robustness in
Human Pose and Shape Estimation [15.432266117706018]
We develop a simulator that can be controlled in a fine-grained manner to explore the manifold of images of human pose.
We introduce a learning-based testing method, termed PoseExaminer, that automatically diagnoses HPS algorithms.
We show that our PoseExaminer discovers a variety of limitations in current state-of-the-art models that are relevant in real-world scenarios.
arXiv Detail & Related papers (2023-03-13T17:58:54Z) - Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z) - Unravelling the Effect of Image Distortions for Biased Prediction of
Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are correlated with the performance gap of the models across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Quantifying and Mitigating Privacy Risks of Contrastive Learning [4.909548818641602]
We perform the first privacy analysis of contrastive learning through the lens of membership inference and attribute inference.
Our results show that contrastive models are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models.
To remedy this situation, we propose the first privacy-preserving contrastive learning mechanism, namely Talos.
arXiv Detail & Related papers (2021-02-08T11:38:11Z) - Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual
Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration, and how some of the lower-scoring models on standard benchmarks perform on par with the best-performing models when trained on the same data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.