Unseen Face Presentation Attack Detection Using Class-Specific Sparse
One-Class Multiple Kernel Fusion Regression
- URL: http://arxiv.org/abs/1912.13276v1
- Date: Tue, 31 Dec 2019 11:53:20 GMT
- Title: Unseen Face Presentation Attack Detection Using Class-Specific Sparse
One-Class Multiple Kernel Fusion Regression
- Authors: Shervin Rahimzadeh Arashloo
- Abstract summary: The paper addresses face presentation attack detection in the challenging conditions of an unseen attack scenario.
A pure one-class face presentation attack detection approach based on kernel regression is developed.
- Score: 15.000818334408802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper addresses face presentation attack detection in the challenging
conditions of an unseen attack scenario where the system is exposed to novel
presentation attacks that were not present in the training step. For this
purpose, a pure one-class face presentation attack detection approach based on
kernel regression is developed which only utilises bona fide (genuine) samples
for training. In the context of the proposed approach, a number of innovations,
including multiple kernel fusion, client-specific modelling, sparse
regularisation and probabilistic modelling of score distributions are
introduced to improve the efficacy of the method. The results of experimental
evaluations conducted on the OULU-NPU, Replay-Mobile, Replay-Attack and
MSU-MFSD datasets illustrate that the proposed method compares very favourably
with other methods operating in an unseen attack detection scenario, while
achieving performance very competitive with that of multi-class methods (which
benefit from presentation attack data for training) despite using only bona
fide samples.
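To make the core idea concrete, below is a minimal sketch of one-class kernel ridge regression with a fixed multiple-kernel fusion, trained only on bona fide samples. The kernel bandwidths, fusion weights and threshold are illustrative assumptions; the paper additionally learns the kernel weights under sparse regularisation, builds client-specific models, and models score distributions probabilistically, all of which this sketch omits.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_one_class_mkl(X, gammas, betas, lam=1e-3):
    """One-class kernel ridge regression on bona fide samples only.

    The fused kernel is a fixed convex combination of RBF kernels; the
    paper learns this combination with sparse regularisation (omitted here).
    """
    K = sum(b * rbf_kernel(X, X, g) for g, b in zip(gammas, betas))
    y = np.ones(len(X))                       # regress bona fide samples to target 1
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha

def score(X_train, X_test, gammas, betas, alpha):
    """Higher score -> more bona fide-like; threshold to flag attacks."""
    k = sum(b * rbf_kernel(X_test, X_train, g) for g, b in zip(gammas, betas))
    return k @ alpha

# usage sketch with synthetic features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                # bona fide training features
gammas, betas = [0.1, 1.0], [0.5, 0.5]        # illustrative kernels and weights
alpha = fit_one_class_mkl(X, gammas, betas)
s = score(X, rng.normal(size=(5, 16)), gammas, betas, alpha)
is_attack = s < 0.5                           # illustrative threshold
```

Because the regressor is fit to a bona fide target alone, a probe is scored purely by its agreement with the genuine class, so unseen attack types never need to appear in training.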
Related papers
- Boosting Out-of-Distribution Detection with Multiple Pre-trained Models [41.66566916581451]
Post hoc detection utilizing pre-trained models has shown promising performance and can be scaled to large-scale problems.
We propose a detection enhancement method by ensembling multiple detection decisions derived from a zoo of pre-trained models.
Our method substantially improves relative performance by 65.40% and 26.96% on the CIFAR10 and ImageNet benchmarks, respectively.
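A hedged sketch of the ensembling idea, assuming score-level fusion of per-model detectors with in-distribution z-normalisation; the paper ensembles detection decisions, so the statistics and fusion rule here are illustrative:

```python
import numpy as np

def ensemble_ood_score(scores, mu, sigma):
    """Fuse OOD scores from a zoo of pre-trained detectors.

    scores: (n_models, n_samples) raw per-model OOD scores
    mu, sigma: per-model mean/std estimated on in-distribution data
    Higher fused score -> more likely out-of-distribution.
    """
    z = (scores - mu[:, None]) / sigma[:, None]   # z-normalise per model
    return z.mean(axis=0)                         # simple average fusion
```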
arXiv Detail & Related papers (2022-12-24T12:11:38Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
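A minimal sketch of the incorporated contrastive term, assuming a standard supervised contrastive loss that pulls same-label representations together; the temperature and exact formulation are illustrative, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Encourage instances sharing a class label to have similar representations."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau - 1e9 * torch.eye(len(z))   # mask self-similarity
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)                           # positives exclude self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```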
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impact of outliers.
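A minimal sketch of a prototype-centered contrastive term under these assumptions: class prototypes are support-set means, and each prototype is contrasted against all query embeddings. The attentive hybrid learning mechanism is omitted, and this is not the paper's exact PAL loss:

```python
import torch
import torch.nn.functional as F

def prototype_centered_loss(support, support_y, query, query_y, tau=0.1):
    """Contrast each class prototype (support-set mean) against all queries."""
    classes = support_y.unique()
    protos = torch.stack([support[support_y == c].mean(0) for c in classes])
    protos = F.normalize(protos, dim=1)
    query = F.normalize(query, dim=1)
    sim = protos @ query.T / tau                     # (n_classes, n_query)
    targets = (query_y[None, :] == classes[:, None]).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(targets * log_prob).sum(1).div(targets.sum(1).clamp(min=1)).mean()
```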
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- ATRO: Adversarial Training with a Rejection Option [10.36668157679368]
This paper proposes a classification framework with a rejection option to mitigate the performance deterioration caused by adversarial examples.
By applying the adversarial training objective to both a classifier and a rejection function simultaneously, the model can abstain from classification when the classifier has insufficient confidence in a test data point.
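A toy confidence-threshold rejector illustrating the abstention mechanics; ATRO itself trains the rejection function jointly with the classifier under an adversarial objective rather than thresholding softmax confidence as done here:

```python
import torch
import torch.nn.functional as F

REJECT = -1  # sentinel label for abstention

def classify_with_rejection(logits, threshold=0.9):
    """Predict the argmax class, or abstain when confidence is too low."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    return torch.where(conf >= threshold, pred, torch.full_like(pred, REJECT))
```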
arXiv Detail & Related papers (2020-10-24T14:05:03Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Quantitative and qualitative analyses on several natural-image datasets and practical systems confirm the superiority of the proposed algorithm.
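A hedged sketch of producing a sequence of adversarial examples with accumulated momentum, in the style of momentum iterative FGSM; HMCAM's Hamiltonian dynamics and sampling steps are more involved and are not reproduced here:

```python
import torch
import torch.nn.functional as F

def momentum_attack_sequence(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Iteratively perturb x with an accumulated-momentum gradient direction,
    collecting the adversarial example produced at every step."""
    x_adv, g = x.clone(), torch.zeros_like(x)
    sequence = []
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().mean().clamp(min=1e-12)  # momentum update
        x_adv = (x_adv.detach() + alpha * g.sign())
        x_adv = x_adv.clamp(min=x - eps, max=x + eps).clamp(0, 1)
        sequence.append(x_adv.clone())
    return sequence
```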
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple yet effective method named contrasting shifted instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
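A compressed sketch of the training objective, assuming hard rotations as the shifting transformations and assuming an `encoder` and a stochastic standard augmentation `aug`; CSI's shift-prediction head and detection score are omitted:

```python
import torch
import torch.nn.functional as F

def shifted_instances(x):
    """Distributionally-shifted augmentations: hard rotations of the batch,
    which CSI treats as negatives rather than positives (assumes square images)."""
    return [torch.rot90(x, k, dims=(2, 3)) for k in (1, 2, 3)]

def csi_style_loss(encoder, aug, x, tau=0.5):
    """SimCLR-style loss over the shift-expanded batch: positives are two
    standard augmentations of the same instance; rotated copies join the negatives."""
    batch = torch.cat([x] + shifted_instances(x))          # (4B, C, H, W)
    z = F.normalize(encoder(torch.cat([aug(batch), aug(batch)])), dim=1)
    n = len(batch)
    labels = torch.arange(n).repeat(2)                     # instance ids for 2 views
    sim = z @ z.T / tau - 1e9 * torch.eye(2 * n)           # mask self-similarity
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)                                  # positive = the other view
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).mean()
```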
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task.
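One common instantiation of the idea, assuming deep features have already been extracted and letting a one-class Gaussian stand in for the anomaly model; the paper's deep-learning solution differs in its details:

```python
import numpy as np

def fit_bona_fide_model(features):
    """Fit a Gaussian to deep features of bona fide faces only."""
    mu = features.mean(0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(f, mu, cov_inv):
    """Mahalanobis distance to the bona fide model; large -> likely attack."""
    d = f - mu
    return np.sqrt(d @ cov_inv @ d)
```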
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack [38.1887818626171]
We propose an ensemble model training framework with random layer sampling to improve the robustness of deep neural networks.
In the proposed training framework, we generate various sampled models through random layer sampling and update the weights of each sampled model.
After the ensemble models are trained, they can hide their gradients efficiently and avoid gradient-based attacks.
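A stochastic-depth-style sketch of random layer sampling, assuming residual blocks so that skipping a layer keeps tensor shapes consistent; the block definitions and keep probability are illustrative:

```python
import torch
import torch.nn as nn

class RandomLayerSampledNet(nn.Module):
    """Residual stack where each block is kept with probability p_keep during
    training, so every forward pass trains a randomly sampled sub-model."""
    def __init__(self, blocks, p_keep=0.8):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.p_keep = p_keep

    def forward(self, x):
        for block in self.blocks:
            if self.training and torch.rand(()) > self.p_keep:
                continue                  # skip this layer in the sampled model
            x = x + block(x)              # residual connection keeps shapes fixed
        return x
```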
arXiv Detail & Related papers (2020-05-21T16:14:18Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other, unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
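A toy sketch of the distributional idea: rather than training against a single crafted attack, the loss is an expectation over perturbations sampled from a parameterised distribution. A per-pixel Gaussian stands in here for ADT's learned adversarial distributions, which are richer:

```python
import torch
import torch.nn.functional as F

def adt_style_loss(model, x, y, mu, log_sigma, eps=8/255, n=4):
    """Average the classification loss over n reparameterised perturbation
    samples drawn from a (toy) Gaussian perturbation distribution."""
    losses = []
    for _ in range(n):
        delta = (mu + log_sigma.exp() * torch.randn_like(x)).clamp(-eps, eps)
        losses.append(F.cross_entropy(model((x + delta).clamp(0, 1)), y))
    return torch.stack(losses).mean()
```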
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
- A Zero-Shot based Fingerprint Presentation Attack Detection System [8.676298469169174]
We propose a novel Zero-Shot Presentation Attack Detection Model to guarantee the generalization of the PAD model.
The proposed ZSPAD-Model is based on a generative model and does not use any negative samples during training.
To improve the performance of the proposed model, 9 confidence scores are discussed in this article.
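A hedged sketch of the zero-shot setup with an autoencoder standing in for the generative model: training uses bona fide samples only, and reconstruction error serves as one possible confidence score among the several the paper discusses:

```python
import torch
import torch.nn as nn

class BonaFideAutoencoder(nn.Module):
    """Trained only on bona fide fingerprint features; no attack samples needed."""
    def __init__(self, dim=256, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def pad_score(model, x):
    """Large reconstruction error -> sample unlike bona fide -> flag as attack."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```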
arXiv Detail & Related papers (2020-02-12T10:52:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.