On the Generalisation Capabilities of Fisher Vector based Face
Presentation Attack Detection
- URL: http://arxiv.org/abs/2103.01721v1
- Date: Tue, 2 Mar 2021 13:49:06 GMT
- Title: On the Generalisation Capabilities of Fisher Vector based Face
Presentation Attack Detection
- Authors: Lázaro J. González-Soler, Marta Gomez-Barrero, Christoph Busch
- Abstract summary: Face Presentation Attack Detection techniques report good detection performance when evaluated on known Presentation Attack Instruments.
In this work, we use a new feature space based on Fisher Vectors, computed from compact Binarised Statistical Image Features histograms.
This new representation, evaluated for challenging unknown attacks taken from freely available facial databases, shows promising results.
- Score: 13.93832810177247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the last decades, the broad development of biometric systems
has unveiled several threats which may decrease their trustworthiness: attack
presentations, which can easily be carried out by a non-authorised subject to
gain access to the biometric system. To mitigate those security concerns,
numerous face Presentation Attack Detection (PAD) techniques have been
proposed. Most of them report good detection performance when evaluated on
known Presentation Attack Instruments (PAI) and acquisition conditions, yet in
more realistic scenarios, where unknown attacks are included in the test set,
existing algorithms often struggle to detect unknown PAI species. In this
work, we use a new feature space based on Fisher Vectors, computed from
compact Binarised Statistical Image Features (BSIF) histograms, which allows
discovering semantic feature subsets from known samples in order to enhance
the detection of unknown attacks. This new representation, evaluated on
challenging unknown attacks taken from freely available facial databases,
shows promising results: a BPCER100 under 17% together with an AUC over 98%
can be achieved in the presence of unknown attacks. In addition, by training
only a limited number of parameters, our method achieves results comparable to
state-of-the-art deep learning-based approaches in cross-dataset scenarios.
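The abstract outlines a pipeline that encodes compact local histogram descriptors (BSIF histograms) into Fisher Vectors over a GMM visual vocabulary and feeds them to a classifier with few trainable parameters. As a rough illustration of that idea only, and not the authors' implementation, the sketch below assumes local BSIF-style histograms have already been extracted per image; all function names, parameters, and defaults are illustrative placeholders.

```python
# Illustrative sketch (not the paper's code): Fisher Vector encoding of
# local histogram descriptors with a diagonal-covariance GMM vocabulary,
# followed by a linear classifier for PAD scoring. Descriptor extraction
# (e.g. BSIF histograms per image patch) is assumed to happen elsewhere.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fit_gmm(descriptors, n_components=64, seed=0):
    """Fit a diagonal-covariance GMM on local descriptors of shape (N, D)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(descriptors)
    return gmm

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (N, D) into one Fisher Vector (2*K*D,)."""
    X = np.atleast_2d(descriptors)
    N, _ = X.shape
    gamma = gmm.predict_proba(X)                      # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    sigma = np.sqrt(var)                              # (K, D) std deviations

    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]        # (N, K, D)
    # Gradients w.r.t. the GMM means and standard deviations
    g_mu = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    g_sigma = (gamma[:, :, None] * (diff ** 2 - 1)).sum(axis=0) \
              / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation

# Hypothetical usage with per-image descriptor sets and binary labels:
# train_fvs = np.stack([fisher_vector(d, gmm) for d in train_descriptor_sets])
# clf = LinearSVC().fit(train_fvs, train_labels)      # bona fide vs. attack
```

Under this sketch, only the GMM and the linear weights are learned, which reflects the small parameter budget the abstract contrasts with deep models. For reference, BPCER100 denotes the Bona fide Presentation Classification Error Rate at the operating point where the APCER equals 1% (ISO/IEC 30107-3).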
Related papers
- Meta-Learning Approaches for Improving Detection of Unseen Speech Deepfakes [9.894633583748895]
Current speech deepfake detection approaches perform satisfactorily against known adversaries.
The proliferation of speech deepfakes on social media underscores the need for systems that can generalize to unseen attacks.
We address this problem from the perspective of meta-learning, aiming to learn attack-invariant features to adapt to unseen attacks with very few samples available.
arXiv Detail & Related papers (2024-10-27T20:14:32Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Corpus Poisoning via Approximate Greedy Gradient Descent [48.5847914481222]
We propose Approximate Greedy Gradient Descent, a new attack on dense retrieval systems based on the widely used HotFlip method for generating adversarial passages.
We show that our method achieves a high attack success rate on several datasets and using several retrievers, and can generalize to unseen queries and new domains.
arXiv Detail & Related papers (2024-06-07T17:02:35Z)
- Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z)
- Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection [0.0]
We propose a new T-shirt Face Presentation Attack database of 1,608 T-shirt attacks using 100 unique presentation attack instruments.
We show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms fail to robustly generalize to the new attacks.
arXiv Detail & Related papers (2022-11-14T14:11:23Z)
- Random Projections for Adversarial Attack Detection [8.684378639046644]
Adversarial attack detection remains a fundamentally challenging problem from two perspectives.
We present a technique that makes use of special properties of random projections, whereby we can characterize the behavior of clean and adversarial examples.
Performance evaluation demonstrates that our technique outperforms ($>0.92$ AUC) competing state-of-the-art (SOTA) attack detection strategies.
arXiv Detail & Related papers (2020-12-11T15:02:28Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider non-robust features as a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- On the Generalisation Capabilities of Fingerprint Presentation Attack Detection Methods in the Short Wave Infrared Domain [13.351759885287526]
Presentation attack detection methods are of utmost importance in order to distinguish between bona fide and attack presentations.
We evaluate the generalisability of multiple PAD algorithms on a dataset of 19,711 bona fide and 4,339 PA samples, including 45 different PAI species.
arXiv Detail & Related papers (2020-10-19T14:50:24Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.