Complex-valued Iris Recognition Network
- URL: http://arxiv.org/abs/2011.11198v4
- Date: Wed, 16 Feb 2022 04:17:35 GMT
- Title: Complex-valued Iris Recognition Network
- Authors: Kien Nguyen, Clinton Fookes, Sridha Sridharan, Arun Ross
- Abstract summary: We design a fully complex-valued neural network for the task of iris recognition.
We conduct experiments on three benchmark datasets - ND-CrossSensor-2013, CASIA-Iris-Thousand and UBIRIS.v2.
We exploit visualization schemes to convey how the complex-valued network, when compared to standard real-valued networks, extracts fundamentally different features from the iris texture.
- Score: 44.40424033688897
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this work, we design a fully complex-valued neural network for the task of
iris recognition. Unlike the problem of general object recognition, where
real-valued neural networks can be used to extract pertinent features, iris
recognition depends on the extraction of both phase and magnitude information
from the input iris texture in order to better represent its biometric content.
This necessitates the extraction and processing of phase information that
cannot be effectively handled by a real-valued neural network. In this regard,
we design a fully complex-valued neural network that can better capture the
multi-scale, multi-resolution, and multi-orientation phase and amplitude
features of the iris texture. We show a strong correspondence of the proposed
complex-valued iris recognition network with Gabor wavelets that are used to
generate the classical IrisCode; however, the proposed method enables a new
capability of automatic complex-valued feature learning that is tailored for
iris recognition. We conduct experiments on three benchmark datasets -
ND-CrossSensor-2013, CASIA-Iris-Thousand and UBIRIS.v2 - and show the benefit
of the proposed network for the task of iris recognition. We exploit
visualization schemes to convey how the complex-valued network, when compared
to standard real-valued networks, extracts fundamentally different features
from the iris texture.
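To make the stated correspondence with Gabor wavelets and the classical IrisCode concrete, the sketch below shows one common way a complex-valued convolution can be realised and how the phase of its response can be quantised into two bits per location. This is not the authors' implementation; it is a minimal PyTorch illustration, assuming the usual formulation of complex convolution via separate real and imaginary kernels, and using hypothetical input dimensions for an unrolled iris texture.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued convolutions.

    For input z = x_r + i*x_i and kernel w = w_r + i*w_i:
        z * w = (x_r*w_r - x_i*w_i) + i*(x_r*w_i + x_i*w_r)
    """
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_i(x_r) + self.conv_r(x_i)
        return real, imag

# A normalized (unrolled) iris texture is real-valued, so its imaginary part starts at zero.
iris = torch.randn(1, 1, 64, 512)  # hypothetical (batch, channel, radial, angular) layout
conv = ComplexConv2d(1, 8, kernel_size=9, padding=4)
real, imag = conv(iris, torch.zeros_like(iris))

# Magnitude and phase of the complex response, analogous to Gabor filter outputs.
magnitude = torch.sqrt(real**2 + imag**2)
phase = torch.atan2(imag, real)

# IrisCode-style two-bit phase quantization: the signs of the real and imaginary parts
# select the quadrant of the phase angle at each location.
code = torch.stack([real > 0, imag > 0], dim=-1)
print(magnitude.shape, phase.shape, code.shape)
```

In this sketch the learned complex kernels play the role of the fixed Gabor wavelets in the classical pipeline: training adapts their scales and orientations to the iris data instead of hand-tuning them.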
Related papers
- EMWaveNet: Physically Explainable Neural Network Based on Microwave Propagation for SAR Target Recognition [4.251056028888424]
This study proposes a physically explainable framework for complex-valued SAR image recognition.
The network architecture is fully parameterized, with all learnable parameters having clear physical meanings, and the computational process is carried out entirely in the frequency domain.
The results demonstrate that the proposed method possesses a strong physical decision logic, high physical explainability and robustness, as well as excellent dealiasing capabilities.
arXiv Detail & Related papers (2024-10-13T07:04:49Z)
- Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data [1.474723404975345]
This work proposes a Spiking Convolutional Neural Network that processes event-based and depth data for gesture recognition.
The network is simulated using the open-source neuromorphic computing framework LAVA for offline training and evaluation on an embedded system.
arXiv Detail & Related papers (2024-01-30T14:42:35Z)
- Exploring Deep Learning Image Super-Resolution for Iris Recognition [50.43429968821899]
We propose the use of two deep learning single-image super-resolution approaches: Stacked Auto-Encoders (SAE) and Convolutional Neural Networks (CNN).
We validate the methods on a database of 1,872 near-infrared iris images, with quality assessment and recognition experiments showing the superiority of the deep learning approaches over the compared algorithms.
arXiv Detail & Related papers (2023-11-02T13:57:48Z)
- Super-Resolution and Image Re-projection for Iris Recognition [67.42500312968455]
Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images.
In this work we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment.
Results show that CNNs and image re-projection can improve the results, especially the accuracy of recognition systems.
arXiv Detail & Related papers (2022-10-20T09:46:23Z)
- Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow learning-based and deep learning-based methods focus on mapping high-resolution (HR) and low-resolution (LR) face pairs into a joint feature space.
In this study, we aim to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z)
- Characterization and recognition of handwritten digits using Julia [0.0]
We implement a hybrid neural network model that is capable of recognizing the digits of the MNIST dataset.
The proposed neural network model can extract features from the image and recognize them layer by layer.
It also incorporates an auto-encoding system and a variational auto-encoding system for the MNIST dataset.
arXiv Detail & Related papers (2021-02-24T00:30:41Z)
- Generalized Iris Presentation Attack Detection Algorithm under Cross-Database Settings [63.90855798947425]
Presentation attacks pose major challenges to most biometric modalities.
We propose a generalized deep learning-based presentation attack detection network, MVANet.
It is inspired by the simplicity and success of hybrid algorithms that fuse multiple detection networks.
arXiv Detail & Related papers (2020-10-25T22:42:27Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
- SIP-SegNet: A Deep Convolutional Encoder-Decoder Network for Joint Semantic Segmentation and Extraction of Sclera, Iris and Pupil based on Periocular Region Suppression [8.64118000141143]
Multimodal biometric recognition systems have the ability to deal with the limitations of unimodal biometric systems.
Such systems possess high distinctiveness, permanence, and performance, while technologies based on other biometric traits can be easily compromised.
This work presents a novel deep learning framework called SIP-SegNet, which performs the joint semantic segmentation of ocular traits.
arXiv Detail & Related papers (2020-02-15T15:20:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.