EXPRESSNET: An Explainable Residual Slim Network for Fingerprint
Presentation Attack Detection
- URL: http://arxiv.org/abs/2305.09397v2
- Date: Tue, 6 Jun 2023 18:24:21 GMT
- Title: EXPRESSNET: An Explainable Residual Slim Network for Fingerprint
Presentation Attack Detection
- Authors: Anuj Rai, Somnath Dey
- Abstract summary: Presentation attacks remain a challenging security issue for automatic fingerprint recognition systems.
This paper proposes a novel explainable residual slim network that detects presentation attacks by representing the visual features of the input fingerprint sample.
- Score: 3.6296396308298795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Presentation attacks remain a challenging security issue for
automatic fingerprint recognition systems. This paper proposes a novel
explainable residual slim network that detects presentation attacks by
representing the visual features of the input fingerprint sample. The
encoder-decoder of this network, together with a channel attention block,
converts the input sample into its heatmap representation, while the modified
residual convolutional neural network classifier discriminates between live
and spoof fingerprints. The heatmap generator block and the modified ResNet
classifier are trained together in an end-to-end manner. The performance of
the proposed model is validated on the benchmark liveness detection
competition databases, i.e., LivDet 2011, 2013, 2015, 2017, and 2019, on
which classification accuracies of 96.86%, 99.84%, 96.45%, 96.07%, and 96.27%
are achieved, respectively. Compared with state-of-the-art techniques, the
proposed method achieves higher classification accuracy under the benchmark
presentation attack detection protocols.
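To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of an encoder-decoder heatmap generator with a channel attention block feeding a small residual classifier, trained end-to-end. All module names, layer counts, and channel sizes are assumptions for illustration and do not reproduce the authors' exact EXPRESSNET configuration.

```python
# Illustrative sketch only: layer and channel choices are assumptions,
# not the authors' exact EXPRESSNET architecture.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumed variant)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class HeatmapGenerator(nn.Module):
    """Encoder-decoder mapping a fingerprint image to a one-channel heatmap."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = ChannelAttention(64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.attention(self.encoder(x)))

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class SlimResNetClassifier(nn.Module):
    """Small residual classifier over the heatmap: live vs. spoof."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

class PADModel(nn.Module):
    """Heatmap generator + residual classifier, optimized jointly (end-to-end)."""
    def __init__(self):
        super().__init__()
        self.generator = HeatmapGenerator()
        self.classifier = SlimResNetClassifier()

    def forward(self, x):
        heatmap = self.generator(x)  # explainable intermediate representation
        return self.classifier(heatmap), heatmap

if __name__ == "__main__":
    model = PADModel()
    logits, heatmap = model(torch.randn(2, 1, 128, 128))  # dummy grayscale inputs
    print(logits.shape, heatmap.shape)  # torch.Size([2, 2]) torch.Size([2, 1, 128, 128])
```

The point mirrored here is that the classifier sees only the generated heatmap, so the intermediate representation that explains the decision is shaped by the same end-to-end training objective.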
Related papers
- DyFFPAD: Dynamic Fusion of Convolutional and Handcrafted Features for Fingerprint Presentation Attack Detection [1.9573380763700712]
A presentation attack can be performed by creating a spoof of a user's fingerprint with or without their consent.
This paper presents a dynamic ensemble of deep CNN and handcrafted features to detect presentation attacks (an illustrative sketch of this kind of feature fusion appears after this list).
We have validated our proposed method on benchmark databases from the Liveness Detection Competition.
arXiv Detail & Related papers (2023-08-19T13:46:49Z)
- CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT)
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z)
- An Open Patch Generator based Fingerprint Presentation Attack Detection using Generative Adversarial Network [3.5558308387389626]
Presentation Attack (PA) or spoofing is one of the threats caused by presenting a spoof of a genuine fingerprint to the sensor of Automatic Fingerprint Recognition Systems (AFRS).
This paper proposes a CNN-based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated from the proposed Open Patch Generator (OPG).
Overall accuracies of 96.20%, 94.97%, and 92.90% have been achieved on the LivDet 2015, 2017, and 2019 databases, respectively, under the LivDet protocol scenarios.
arXiv Detail & Related papers (2023-06-06T10:52:06Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degrade when the training data differ from the testing data.
We propose a novel adversarial information network (AIN) to address this problem.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- MoSFPAD: An end-to-end Ensemble of MobileNet and Support Vector Classifier for Fingerprint Presentation Attack Detection [2.733700237741334]
This paper proposes a novel end-to-end model to detect fingerprint presentation attacks.
The proposed model incorporates MobileNet as a feature extractor and a Support Vector Classifier as the classifier.
The performance of the proposed model is compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-03-02T18:27:48Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders [81.5490760424213]
We propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a probabilistic RGB-D saliency detection network.
arXiv Detail & Related papers (2020-04-13T04:12:59Z)
- Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks [5.4572790062292125]
Recent studies have shown that deep learning models are vulnerable to crafted adversarial inputs.
We propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors.
We achieve a 99.5% detection accuracy on the MNIST dataset and 97.5% on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-02-22T21:13:00Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
- Detection Method Based on Automatic Visual Shape Clustering for Pin-Missing Defect in Transmission Lines [1.602803566465659]
Bolts are the most numerous fasteners in transmission lines and are prone to losing their split pins.
Automatically detecting pin-missing defects on bolts in transmission lines, so that faults can be found and fixed in a timely and efficient manner, is a difficult problem.
In this paper, an automatic detection model called Automatic Visual Shape Clustering Network (AVSCNet) is constructed for pin-missing defects.
arXiv Detail & Related papers (2020-01-17T10:57:37Z)
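Several of the fingerprint PAD entries above, DyFFPAD in particular, describe fusing learned CNN features with handcrafted features. The sketch below is a minimal, hypothetical illustration of that fusion pattern: it concatenates a small CNN embedding with a simple gradient-magnitude histogram before a linear live/spoof head. The descriptor, the network, and all sizes are placeholders chosen for illustration and are not the features or models used in the cited papers.

```python
# Hypothetical sketch of deep + handcrafted feature fusion for PAD.
# The gradient-magnitude histogram is a placeholder descriptor, not the
# feature set used by DyFFPAD or MoSFPAD.
import torch
import torch.nn as nn

def gradient_histogram(img: torch.Tensor, bins: int = 16) -> torch.Tensor:
    """Handcrafted descriptor: histogram of gradient magnitudes per image."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    mag = torch.sqrt(dx[:, :, :-1, :] ** 2 + dy[:, :, :, :-1] ** 2)
    return torch.stack([torch.histc(m, bins=bins, min=0.0, max=1.0) for m in mag])

class FusionPAD(nn.Module):
    """Concatenates CNN embeddings with handcrafted features before classification."""
    def __init__(self, bins: int = 16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + bins, 2)  # live vs. spoof logits

    def forward(self, x):
        fused = torch.cat([self.cnn(x), gradient_histogram(x)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    print(FusionPAD()(torch.rand(4, 1, 96, 96)).shape)  # torch.Size([4, 2])
```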