DFCANet: Dense Feature Calibration-Attention Guided Network for Cross
Domain Iris Presentation Attack Detection
- URL: http://arxiv.org/abs/2111.00919v1
- Date: Mon, 1 Nov 2021 13:04:23 GMT
- Title: DFCANet: Dense Feature Calibration-Attention Guided Network for Cross Domain Iris Presentation Attack Detection
- Authors: Gaurav Jaswal, Aman Verma, Sumantra Dutta Roy, Raghavendra Ramachandra
- Abstract summary: Iris presentation attack detection (IPAD) is essential for securing personal identity.
Existing IPAD algorithms do not generalize well to unseen and cross-domain scenarios.
This paper proposes DFCANet: Dense Feature Calibration and Attention Guided Network.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Iris presentation attack detection (IPAD) is essential for securing
personal identity in widely used iris recognition systems. However, existing
IPAD algorithms do not generalize well to unseen and cross-domain scenarios
because images are captured in unconstrained environments and there is high
visual correlation between bonafide and attack samples. These similarities in
intricate textural and morphological patterns of iris ocular images contribute
further to performance degradation. To alleviate these shortcomings, this paper
proposes DFCANet: Dense Feature Calibration and Attention Guided Network which
calibrates the locally spread iris patterns with the globally located ones.
Uplifting advantages from feature calibration convolution and residual
learning, DFCANet generates domain-specific iris feature representations. Since
some channels in the calibrated feature maps contain more prominent
information, we capitalize on discriminative feature learning across the channels
through the channel attention mechanism. In order to intensify the challenge
for our proposed model, we make DFCANet operate over nonsegmented and
non-normalized ocular iris images. Extensive experimentation conducted over
challenging cross-domain and intra-domain scenarios demonstrates consistently
superior results. Compared to state-of-the-art methods, DFCANet achieves
significant performance gains on the benchmark IIITD CLI, IIIT CSD and
NDCLD13 databases. Further, a novel incremental learning-based
methodology has been introduced so as to overcome disentangled iris-data
characteristics and data scarcity. This paper also pursues the challenging
scenario that considers soft-lens under the attack category with evaluation
performed under various cross-domain protocols. The code will be made publicly
available.
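The channel attention the abstract describes can be sketched as a squeeze-and-excitation style block: global average pooling summarizes each channel, a small bottleneck MLP produces a per-channel gate, and the gate rescales the calibrated feature map. This is an illustrative reconstruction under stated assumptions (the weights `w1`, `w2` and the reduction ratio are placeholders), not the authors' released DFCANet code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feat: (C, H, W) calibrated feature map
    w1:   (C//r, C) squeeze weights (reduction ratio r is an assumption)
    w2:   (C, C//r) excitation weights
    """
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    desc = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate in (0, 1)
    gate = sigmoid(w2 @ np.maximum(w1 @ desc, 0.0))
    # Rescale each channel by its learned importance
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1
w2 = rng.standard_normal((16, 4)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (16, 8, 8)
```

Because the gate lies strictly in (0, 1), the block can only attenuate channels; prominent channels are simply attenuated less, which is how discriminative channels come to dominate the representation.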
Related papers
- S-Adapter: Generalizing Vision Transformer for Face Anti-Spoofing with Statistical Tokens [45.06704981913823]
Face Anti-Spoofing (FAS) aims to detect malicious attempts to invade a face recognition system by presenting spoofed faces.
We propose a novel Statistical Adapter (S-Adapter) that gathers local discriminative and statistical information from localized token histograms.
To further improve the generalization of the statistical tokens, we propose a novel Token Style Regularization (TSR)
Our experimental results demonstrate that our proposed S-Adapter and TSR provide significant benefits in both zero-shot and few-shot cross-domain testing, outperforming state-of-the-art methods on several benchmark tests.
arXiv Detail & Related papers (2023-09-07T22:36:22Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- A Global and Patch-wise Contrastive Loss for Accurate Automated Exudate Detection [12.669734891001667]
Diabetic retinopathy (DR) is a leading global cause of blindness.
Early detection of hard exudates plays a crucial role in identifying DR, which aids in treating diabetes and preventing vision loss.
We present a novel supervised contrastive learning framework to optimize hard exudate segmentation.
arXiv Detail & Related papers (2023-02-22T17:39:00Z)
- Intra and Cross-spectrum Iris Presentation Attack Detection in the NIR and Visible Domains Using Attention-based and Pixel-wise Supervised Learning [8.981081097203088]
Iris Presentation Attack Detection (PAD) is essential to secure iris recognition systems.
Recent iris PAD solutions achieved good performance by leveraging deep learning techniques.
This chapter presents a novel attention-based deep pixel-wise binary supervision (A-PBS) method.
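Pixel-wise binary supervision of the kind A-PBS builds on pairs a per-pixel loss on an intermediate score map with the usual image-level binary loss, so every spatial location receives a gradient. A minimal numpy sketch of that idea (the weighting `lam` and the map size are illustrative assumptions, not values from the paper):

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy, averaged over all elements."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def pixel_wise_binary_loss(score_map, binary_score, label, lam=0.5):
    """Combine per-pixel supervision with image-level supervision.

    score_map:    (H, W) predicted attack probability per pixel
    binary_score: scalar predicted attack probability for the image
    label:        1 for attack, 0 for bonafide (broadcast to every pixel)
    """
    pixel_target = np.full_like(score_map, float(label))
    return lam * bce(score_map, pixel_target) + (1 - lam) * bce(
        np.array([binary_score]), np.array([float(label)])
    )

# A confident, correct prediction on an attack sample yields a small loss
loss = pixel_wise_binary_loss(np.full((14, 14), 0.9), 0.9, label=1)
print(loss < 0.2)  # True
</p>```

The pixel-level term acts as a dense regularizer that discourages the network from relying on a few dominant regions, which is the usual motivation for this style of supervision.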
arXiv Detail & Related papers (2022-05-05T11:12:59Z)
- Toward Accurate and Reliable Iris Segmentation Using Uncertainty Learning [96.72850130126294]
We propose an Iris U-transformer (IrisUsformer) for accurate and reliable iris segmentation.
For better accuracy, we elaborately design IrisUsformer by adopting position-sensitive operation and re-packaging transformer block.
We show that IrisUsformer achieves better segmentation accuracy using 35% MACs of the SOTA IrisParseNet.
arXiv Detail & Related papers (2021-10-20T01:37:19Z)
- Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212]
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We successfully prove the design of our proposed PAD solution in a step-wise ablation study.
arXiv Detail & Related papers (2021-09-16T13:06:43Z)
- Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow learning-based and deep learning-based methods focus on mapping the HR-LR face pairs into a joint feature space.
In this study, we desire to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Generalized Iris Presentation Attack Detection Algorithm under Cross-Database Settings [63.90855798947425]
Presentation attacks pose major challenges to most of the biometric modalities.
We propose a generalized deep learning-based presentation attack detection network, MVANet.
It is inspired by the simplicity and success of hybrid algorithms that fuse multiple detection networks.
arXiv Detail & Related papers (2020-10-25T22:42:27Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel Frustum ultrasound based catheter segmentation method.
The proposed method achieved the state-of-the-art performance with an efficiency of 0.25 second per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- SIP-SegNet: A Deep Convolutional Encoder-Decoder Network for Joint Semantic Segmentation and Extraction of Sclera, Iris and Pupil based on Periocular Region Suppression [8.64118000141143]
Multimodal biometric recognition systems can address the limitations of unimodal biometric systems.
Such systems possess high distinctiveness, permanence, and performance, while technologies based on other biometric traits can be easily compromised.
This work presents a novel deep learning framework called SIP-SegNet, which performs the joint semantic segmentation of ocular traits.
arXiv Detail & Related papers (2020-02-15T15:20:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.