Deep convolutional neural networks for face and iris presentation attack
detection: Survey and case study
- URL: http://arxiv.org/abs/2004.12040v2
- Date: Wed, 29 Apr 2020 03:51:10 GMT
- Authors: Yomna Safaa El-Din, Mohamed N. Moustafa, Hani Mahdi
- Abstract summary: Cross-dataset evaluation on face PAD showed better generalization than state of the art.
We propose the use of a single deep network trained to detect both face and iris attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biometric presentation attack detection is gaining increasing attention.
Users of mobile devices find it more convenient to unlock their smart
applications with finger, face, or iris recognition instead of passwords. In
this paper, we survey the approaches presented in the recent literature to
detect face and iris presentation attacks. Specifically, we investigate the
effectiveness of fine-tuning very deep convolutional neural networks for the
task of face and iris anti-spoofing. We compare two different fine-tuning
approaches on six publicly available benchmark datasets. Results show the
effectiveness of these deep models in learning discriminative features that can
tell apart real from fake biometric images with very low error rates.
Cross-dataset evaluation on face PAD showed better generalization than the
state of the art. We also performed cross-dataset testing on iris PAD datasets
in terms of equal error rate, which had not been reported in the literature
before. Additionally, we propose the use of a single deep network trained to
detect both face and iris attacks, and observed no accuracy degradation
compared to networks trained on only one biometric separately. Finally, we
analyzed the features learned by the network, in correlation with the image
frequency components, to justify its prediction decisions.
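The cross-dataset results above are reported in terms of equal error rate (EER): the operating point at which the false acceptance rate (attacks classified as genuine) equals the false rejection rate (genuine samples classified as attacks). As a rough illustration of the metric only, not of the paper's actual evaluation code, here is a minimal sketch that sweeps thresholds over hypothetical detector scores:

```python
def equal_error_rate(genuine_scores, attack_scores):
    """Approximate the EER of a detector whose scores are higher for
    genuine samples: find the threshold where the false acceptance
    rate (FAR) and false rejection rate (FRR) cross, and return the
    average of the two rates at that point."""
    best_gap, eer = float("inf"), 1.0
    # Sweep every observed score as a candidate decision threshold.
    for threshold in sorted(genuine_scores + attack_scores):
        far = sum(s >= threshold for s in attack_scores) / len(attack_scores)
        frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < best_gap:  # closest FAR/FRR crossing so far
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical PAD scores (higher = judged more likely genuine).
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
attack = [0.1, 0.3, 0.2, 0.75, 0.15]
print(equal_error_rate(genuine, attack))  # → 0.2
```

With one attack scoring above one genuine sample, the FAR/FRR curves cross at 20%, so the sketch reports an EER of 0.2; a lower EER means better separation of real from fake biometric images.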
Related papers
- Presentation Attack detection using Wavelet Transform and Deep Residual
Neural Net [5.425986555749844]
Biometric systems can be deceived by impostors in several ways.
Biometric images, especially of the iris and face, are vulnerable to different presentation attacks.
This research applies deep learning approaches to mitigate presentation attacks in a biometric access control system.
arXiv Detail & Related papers (2023-11-23T20:21:49Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Improving DeepFake Detection Using Dynamic Face Augmentation [0.8793721044482612]
Most publicly available DeepFake detection datasets have limited variations.
Deep neural networks tend to overfit to the facial features instead of learning to detect manipulation features of DeepFake content.
We introduce Face-Cutout, a data augmentation method for training Convolutional Neural Networks (CNN) to improve DeepFake detection.
arXiv Detail & Related papers (2021-02-18T20:25:45Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Generalized Iris Presentation Attack Detection Algorithm under Cross-Database Settings [63.90855798947425]
Presentation attacks pose major challenges to most of the biometric modalities.
We propose a generalized deep learning-based presentation attack detection network, MVANet.
It is inspired by the simplicity and success of hybrid algorithms and the fusion of multiple detection networks.
arXiv Detail & Related papers (2020-10-25T22:42:27Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection [29.83857022733448]
This paper is the first to study how fragile a CNN-based model for multiple landmark detection is to adversarial perturbations.
We propose a novel Adaptive Targeted Iterative FGSM attack against the state-of-the-art models in multiple landmark detection.
arXiv Detail & Related papers (2020-07-10T07:58:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.