Anomaly Detection-Based Unknown Face Presentation Attack Detection
- URL: http://arxiv.org/abs/2007.05856v1
- Date: Sat, 11 Jul 2020 21:20:55 GMT
- Title: Anomaly Detection-Based Unknown Face Presentation Attack Detection
- Authors: Yashasvi Baweja, Poojan Oza, Pramuditha Perera and Vishal M. Patel
- Abstract summary: Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection (fPAD).
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
- Score: 74.4918294453537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection-based spoof attack detection is a recent development in
face Presentation Attack Detection (fPAD), where a spoof detector is learned
using only non-attacked images of users. These detectors are of practical
importance as they are shown to generalize well to new attack types. In this
paper, we present a deep-learning solution for anomaly detection-based spoof
attack detection where both classifier and feature representations are learned
together end-to-end. First, we introduce a pseudo-negative class during
training in the absence of attacked images. The pseudo-negative class is
modeled using a Gaussian distribution whose mean is calculated by a weighted
running mean. Second, we use a pairwise confusion loss to further regularize
the training process. The proposed approach benefits from the representation
learning power of CNNs and learns better features for the fPAD task, as shown in
our ablation study. We perform extensive experiments on four publicly available
datasets: Replay-Attack, ROSE-Youtu, OULU-NPU and Spoof in the Wild to show the
effectiveness of the proposed approach over previous methods. Code is
available at: \url{https://github.com/yashasvi97/IJCB2020_anomaly}
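The two training ingredients above can be sketched in code. This is a minimal, illustrative sketch only: the class names, the update weight `alpha`, the noise scale `sigma`, and the exact form of the pairwise confusion term are assumptions for illustration, not the paper's actual implementation (see the linked repository for that). The pseudo-negative class is a Gaussian whose mean is a weighted running mean of bona-fide features, and the pairwise confusion term penalizes the distance between prediction vectors of paired samples.

```python
import numpy as np

rng = np.random.default_rng(0)

class PseudoNegativeGaussian:
    """Models the absent attack class as N(mu, sigma^2 I), where mu
    tracks a weighted running mean of bona-fide feature vectors.
    `alpha` and `sigma` are illustrative hyperparameters."""
    def __init__(self, dim, alpha=0.9, sigma=1.0):
        self.mu = np.zeros(dim)
        self.alpha = alpha   # weight kept on the previous running mean
        self.sigma = sigma   # spread of the pseudo-negative samples

    def update(self, features):
        # Weighted running mean over a batch of bona-fide features.
        batch_mean = features.mean(axis=0)
        self.mu = self.alpha * self.mu + (1 - self.alpha) * batch_mean

    def sample(self, n):
        # Draw n pseudo-negative feature vectors around the running mean.
        return self.mu + self.sigma * rng.standard_normal((n, self.mu.size))

def pairwise_confusion(p1, p2):
    # Mean squared Euclidean distance between prediction vectors of
    # paired samples; minimizing it discourages overconfident,
    # sample-specific predictions (one common form of this loss).
    return np.mean(np.sum((p1 - p2) ** 2, axis=1))

# Toy usage: a batch of 8 bona-fide feature vectors of dimension 4.
feats = rng.standard_normal((8, 4)) + 3.0
png = PseudoNegativeGaussian(dim=4, alpha=0.9)
png.update(feats)
neg = png.sample(16)   # 16 pseudo-negative samples for the classifier
print(neg.shape)
```

In an end-to-end setup, the pseudo-negative samples would be fed to the classifier as the "attack" class alongside real bona-fide features, with the confusion term added to the classification loss.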
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibit good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - ExAD: An Ensemble Approach for Explanation-based Adversarial Detection [17.455233006559734]
We propose ExAD, a framework to detect adversarial examples using an ensemble of explanation techniques.
We evaluate our approach using six state-of-the-art adversarial attacks on three image datasets.
arXiv Detail & Related papers (2021-03-22T00:53:07Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Detection of Iterative Adversarial Attacks via Counter Attack [4.549831511476249]
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data.
For high-dimensional data, like images, they are inherently vulnerable to adversarial attacks.
In this work we outline a mathematical proof that the CW attack can be used as a detector itself.
arXiv Detail & Related papers (2020-09-23T21:54:36Z) - AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing
Flows [11.510009152620666]
We introduce AdvFlow: a novel black-box adversarial attack method on image classifiers.
We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely.
arXiv Detail & Related papers (2020-07-15T02:13:49Z) - Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z) - Adversarial Detection and Correction by Matching Prediction
Distributions [0.0]
The detector almost completely neutralises powerful attacks like Carlini-Wagner or SLIDE on MNIST and Fashion-MNIST.
We show that our method is still able to detect the adversarial examples in the case of a white-box attack where the attacker has full knowledge of both the model and the defence.
arXiv Detail & Related papers (2020-02-21T15:45:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.