Learning One Class Representations for Face Presentation Attack
Detection using Multi-channel Convolutional Neural Networks
- URL: http://arxiv.org/abs/2007.11457v1
- Date: Wed, 22 Jul 2020 14:19:33 GMT
- Title: Learning One Class Representations for Face Presentation Attack
Detection using Multi-channel Convolutional Neural Networks
- Authors: Anjith George and Sebastien Marcel
- Abstract summary: Presentation attack detection (PAD) methods often fail to generalize to unseen attacks.
We propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN).
A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while staying far from the representation of attacks.
The proposed framework introduces a novel approach to learning a robust PAD system from bonafide and available (known) attack classes.
- Score: 7.665392786787577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition has evolved as a widely used biometric modality. However,
its vulnerability against presentation attacks poses a significant security
threat. Though presentation attack detection (PAD) methods try to address this
issue, they often fail in generalizing to unseen attacks. In this work, we
propose a new framework for PAD using a one-class classifier, where the
representation used is learned with a Multi-Channel Convolutional Neural
Network (MCCNN). A novel loss function is introduced, which forces the network
to learn a compact embedding for bonafide class while being far from the
representation of attacks. A one-class Gaussian Mixture Model is used on top of
these embeddings for the PAD task. The proposed framework introduces a novel
approach to learn a robust PAD system from bonafide and available (known)
attack classes. This is particularly important, as collecting bonafide data
and simpler attacks is much easier than collecting a wide variety of
expensive attacks. The proposed system is evaluated on the publicly available WMCA
multi-channel face PAD database, which contains a wide variety of 2D and 3D
attacks. Further, we have performed experiments with MLFP and SiW-M datasets
using RGB channels only. Superior performance in unseen attack protocols shows
the effectiveness of the proposed approach. Software, data, and protocols to
reproduce the results are made available publicly.
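The pipeline the abstract describes, learn an embedding, then fit a one-class Gaussian Mixture Model on bonafide samples only, can be sketched in miniature. The sketch below is an illustrative stand-in, not the paper's MCCNN: it fits a single diagonal-covariance Gaussian (a one-component GMM) to toy 2-D "embeddings" of bonafide samples and scores new samples by log-likelihood, so attacks far from the bonafide cluster score low. All data, dimensions, and names here are hypothetical.

```python
import math

def fit_diag_gaussian(embeddings):
    """Fit a single diagonal-covariance Gaussian (a one-component GMM)
    to bonafide embeddings only -- the one-class training regime."""
    n, d = len(embeddings), len(embeddings[0])
    mean = [sum(e[j] for e in embeddings) / n for j in range(d)]
    # Small floor on the variance avoids division by zero on tight clusters.
    var = [sum((e[j] - mean[j]) ** 2 for e in embeddings) / n + 1e-6
           for j in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Per-sample log-likelihood under the fitted Gaussian;
    low values flag presentation attacks."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# Toy embeddings: a bonafide cluster near the origin, one attack far away.
bonafide = [[0.1, 0.0], [-0.1, 0.1], [0.0, -0.1], [0.05, 0.05]]
attack = [3.0, 3.0]

mean, var = fit_diag_gaussian(bonafide)
score_bona = log_likelihood(bonafide[0], mean, var)
score_attack = log_likelihood(attack, mean, var)
# The attack embedding scores far below any bonafide one; a threshold on
# this score is the PAD decision.
```

In the paper itself the embeddings come from the MCCNN trained with the proposed loss; the point of the sketch is only the one-class scoring step on top.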
Related papers
- Hyperbolic Face Anti-Spoofing [21.981129022417306]
We propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space.
For unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and then a hyperbolic binary logistic regression layer is cascaded for classification.
To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models.
arXiv Detail & Related papers (2023-08-17T17:18:21Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Generalized Insider Attack Detection Implementation using NetFlow Data [0.6236743421605786]
We study an approach centered on using network data to identify attacks.
Our work builds on unsupervised machine learning techniques such as One-Class SVM and bi-clustering.
We show that our approach is a promising tool for insider attack detection in realistic settings.
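A One-Class SVM, as used in this insider-detection work, learns a boundary around normal traffic and flags anything outside it. As a minimal stand-in that captures the same one-class idea without the kernel machinery, the sketch below fits a centroid plus a covering radius on "normal" flow features and flags points outside the radius; it is not an actual SVM, and all data and names are hypothetical.

```python
def fit_one_class(points, quantile=0.95):
    """Fit a crude one-class model: the centroid of the normal data plus
    a radius covering most training points (a stand-in for the enclosing
    boundary a One-Class SVM would learn)."""
    n, d = len(points), len(points[0])
    centroid = [sum(p[j] for p in points) / n for j in range(d)]
    dists = sorted(sum((p[j] - centroid[j]) ** 2 for j in range(d)) ** 0.5
                   for p in points)
    radius = dists[min(n - 1, int(quantile * n))]
    return centroid, radius

def is_anomaly(x, centroid, radius):
    """Points outside the covering radius are flagged as anomalous."""
    dist = sum((x[j] - centroid[j]) ** 2 for j in range(len(x))) ** 0.5
    return dist > radius

# Toy 2-D flow features clustered around "normal" behaviour.
normal = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.1], [0.95, 1.0]]
centroid, radius = fit_one_class(normal)

# An insider-like flow far from normal traffic is flagged.
flagged = is_anomaly([5.0, 5.0], centroid, radius)
```

The `quantile` parameter plays a role loosely analogous to the SVM's slack: it controls how much training data may fall outside the learned boundary.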
arXiv Detail & Related papers (2020-10-27T14:00:31Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Leveraging Siamese Networks for One-Shot Intrusion Detection Model [0.0]
Supervised Machine Learning (ML) to enhance Intrusion Detection Systems has been the subject of significant research.
However, retraining the models in situ renders the network susceptible to attacks owing to the time window required to acquire a sufficient volume of data.
Here, a complementary approach referred to as 'One-Shot Learning' is proposed, whereby a limited number of examples of a new attack class is used to identify that class.
A Siamese Network is trained to differentiate between classes based on pair similarities, rather than features, allowing it to identify new and previously unseen attacks.
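The pairwise idea in this entry, judging whether two samples belong to the same class by the distance between their embeddings, can be sketched without the network itself. Below, a plain Euclidean distance stands in for the learned Siamese embedding distance; the threshold, the 3-D "embeddings", and the support example are all illustrative.

```python
def euclidean(a, b):
    """Distance in the (stand-in) embedding space."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def same_class(query_emb, support_emb, threshold=1.0):
    """A Siamese network scores PAIRS: embeddings closer than a threshold
    are taken as the same class.  A single stored support example of a
    new attack class is then enough to match it ('one-shot')."""
    return euclidean(query_emb, support_emb) < threshold

# One support example of a newly observed attack class.
support = [0.9, 0.1, 0.0]
match = same_class([1.0, 0.0, 0.1], support)      # near the support example
no_match = same_class([0.0, 5.0, 5.0], support)   # far away
```

In the actual model the embedding function is learned so that same-class pairs land close together, which is what makes a fixed distance threshold meaningful.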
arXiv Detail & Related papers (2020-06-27T11:40:01Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
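The diversity mechanism in the DFANet entry, applying dropout inside the surrogate model so each forward pass behaves like a slightly different model, can be illustrated at the level of a single feature vector. The sketch below only shows the dropout masking itself on toy activations; the function name, data, and seeds are assumptions for illustration, not DFANet's implementation.

```python
import random

def dropout_features(features, p=0.5, rng=None):
    """Zero each feature with probability p and rescale survivors by
    1/(1-p) -- the dropout mechanism DFANet-style methods apply inside
    convolutional layers to diversify surrogate models while crafting
    transferable adversarial examples."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    keep = 1.0 - p
    return [f / keep if rng.random() < keep else 0.0 for f in features]

# Two different masks over the same activations act like two distinct
# "surrogate" views of the model, which is what improves transferability.
activations = [1.0] * 8
view_a = dropout_features(activations, rng=random.Random(1))
view_b = dropout_features(activations, rng=random.Random(2))
```

Averaging attack gradients over many such randomly masked views is the ensemble-like effect the abstract refers to.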
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.