Cross-Spectral Iris Matching Using Conditional Coupled GAN
- URL: http://arxiv.org/abs/2010.11689v1
- Date: Fri, 9 Oct 2020 19:13:24 GMT
- Title: Cross-Spectral Iris Matching Using Conditional Coupled GAN
- Authors: Moktari Mostofa, Fariborz Taherkhani, Jeremy Dawson, Nasser M.
Nasrabadi
- Abstract summary: Cross-spectral iris recognition is emerging as a promising biometric approach to authenticating the identity of individuals.
However, matching iris images acquired at different spectral bands shows significant performance degradation compared to single-band near-infrared (NIR) matching.
We propose a conditional coupled generative adversarial network (CpGAN) architecture for cross-spectral iris recognition.
- Score: 22.615156512223766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-spectral iris recognition is emerging as a promising biometric approach
to authenticating the identity of individuals. However, matching iris images
acquired at different spectral bands shows significant performance degradation
when compared to single-band near-infrared (NIR) matching due to the spectral
gap between iris images obtained in the NIR and visible-light (VIS) spectra.
Although researchers have recently focused on deep-learning-based approaches to
recover invariant representative features for more accurate recognition
performance, the existing methods cannot achieve the expected accuracy required
for commercial applications. Hence, in this paper, we propose a conditional
coupled generative adversarial network (CpGAN) architecture for cross-spectral
iris recognition by projecting the VIS and NIR iris images into a
low-dimensional embedding domain to explore the hidden relationship between
them. The conditional CpGAN framework consists of a pair of GAN-based networks,
one responsible for retrieving images in the visible domain and the other
responsible for retrieving images in the NIR domain. Both networks try to map
the data into a common embedding subspace to ensure maximum pair-wise
similarity between the feature vectors from the two iris modalities of the same
subject. To demonstrate the usefulness of the proposed approach, extensive
experimental results on the PolyU dataset are compared against existing
state-of-the-art cross-spectral recognition methods.
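To make the coupled-mapping idea above concrete, the following minimal sketch pairs two small convolutional encoders (one per spectrum) that project NIR and VIS iris images into a shared embedding space and pulls genuine (same-subject) pairs together with a contrastive loss. It is an illustration under assumed layer sizes, embedding dimension, and loss choice, not the authors' implementation; the full conditional CpGAN additionally uses GAN-based generators and discriminators, which are omitted here.

```python
# Minimal sketch (not the authors' code): two coupled encoders map NIR and VIS
# iris images into a common embedding subspace; a contrastive loss enforces
# pairwise similarity for genuine (same-subject) pairs. Layer sizes, the
# embedding dimension, and the margin are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_encoder(in_ch: int, embed_dim: int = 128) -> nn.Sequential:
    """A small CNN encoder; the real CpGAN uses GAN-based networks instead."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )


class CoupledEmbedding(nn.Module):
    """One encoder per spectrum, both projecting into the same subspace."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.nir_net = make_encoder(1, embed_dim)   # NIR images: 1 channel
        self.vis_net = make_encoder(3, embed_dim)   # VIS images: 3 channels

    def forward(self, nir_img, vis_img):
        z_nir = F.normalize(self.nir_net(nir_img), dim=1)
        z_vis = F.normalize(self.vis_net(vis_img), dim=1)
        return z_nir, z_vis


def contrastive_loss(z_nir, z_vis, same_subject, margin: float = 1.0):
    """Pull genuine NIR/VIS pairs together, push impostor pairs apart."""
    dist = F.pairwise_distance(z_nir, z_vis)
    pos = same_subject * dist.pow(2)
    neg = (1.0 - same_subject) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


if __name__ == "__main__":
    model = CoupledEmbedding()
    nir = torch.randn(8, 1, 64, 64)              # dummy NIR iris crops
    vis = torch.randn(8, 3, 64, 64)              # dummy VIS iris crops
    labels = torch.randint(0, 2, (8,)).float()   # 1 = same subject, 0 = impostor
    z_n, z_v = model(nir, vis)
    print(contrastive_loss(z_n, z_v, labels).item())
```

At test time, cross-spectral matching would then reduce to comparing the two embeddings (e.g., by Euclidean or cosine distance), which is the role the common embedding subspace plays in the paper.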
Related papers
- DCN-T: Dual Context Network with Transformer for Hyperspectral Image Classification [109.09061514799413]
Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions.
We propose a tri-spectral image generation pipeline that transforms HSI into high-quality tri-spectral images.
Our proposed method outperforms state-of-the-art methods for HSI classification.
arXiv Detail & Related papers (2023-04-19T18:32:52Z)
- Physically-Based Face Rendering for NIR-VIS Face Recognition [165.54414962403555]
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps.
We propose a novel method for paired NIR-VIS facial image generation.
To facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss.
arXiv Detail & Related papers (2022-11-11T18:48:16Z)
- A Bidirectional Conversion Network for Cross-Spectral Face Recognition [1.9766522384767227]
Cross-spectral face recognition is challenging due to the dramatic difference between visible-light and IR imagery.
This paper proposes a bidirectional cross-spectral conversion framework (BCSC-GAN) between heterogeneous face images.
The network reduces the cross-spectral recognition problem into an intra-spectral problem, and improves performance by fusing bidirectional information.
arXiv Detail & Related papers (2022-05-03T16:20:10Z)
- Heterogeneous Visible-Thermal and Visible-Infrared Face Recognition using Unit-Class Loss and Cross-Modality Discriminator [0.43748379918040853]
We propose an end-to-end framework for cross-modal face recognition.
A novel Unit-Class Loss is proposed for preserving identity information while discarding modality information.
The proposed network can be used to extract modality-independent vector representations or to perform matching-pair classification for test images.
arXiv Detail & Related papers (2021-11-29T06:14:00Z)
- Synthesis-Guided Feature Learning for Cross-Spectral Periocular Recognition [1.52292571922932]
We propose a novel approach to cross-spectral periocular verification.
It primarily focuses on learning a mapping from visible and NIR periocular images to a shared latent representational subspace.
We show that the auxiliary image reconstruction task results in learning a more discriminative, domain-invariant subspace.
arXiv Detail & Related papers (2021-11-16T19:22:20Z)
- Iris Recognition Based on SIFT Features [63.07521951102555]
We use the Scale Invariant Feature Transform (SIFT) for recognition using iris images.
We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator.
We also show the complement between the SIFT approach and a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets.
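As a rough illustration of the keypoint-based matching summarized in this entry, the snippet below detects SIFT keypoints in two iris images and matches their descriptors with Lowe's ratio test. It is a generic OpenCV sketch with placeholder file names, not the paper's implementation, which additionally studies the complementarity with the Log-Gabor/polar-coordinate matcher noted above.

```python
# Generic SIFT keypoint matching between two (already segmented) iris images --
# an illustration of the approach summarized above, not the paper's code.
# Assumes opencv-python >= 4.4, where SIFT is available as cv2.SIFT_create().
import cv2

# Placeholder file names; any pair of grayscale iris images would do.
img1 = cv2.imread("iris_probe.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("iris_gallery.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints in scale space
kp2, des2 = sift.detectAndCompute(img2, None)   # + local texture descriptors

# Brute-force descriptor matching with Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# A simple similarity score: more surviving matches -> more likely the same iris.
print(f"{len(good)} matched SIFT keypoints")
```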
arXiv Detail & Related papers (2021-10-30T04:55:33Z)
- Deep GAN-Based Cross-Spectral Cross-Resolution Iris Recognition [15.425678759101203]
Cross-spectral iris recognition has emerged as a promising biometric approach to establish the identity of individuals.
However, matching iris images acquired at different spectral bands (i.e., matching a visible (VIS) iris probe against a gallery of near-infrared (NIR) iris images, or vice versa) shows a significant performance degradation.
We have investigated a range of deep convolutional generative adversarial network (DCGAN) architectures to further improve the accuracy of cross-spectral iris recognition methods.
arXiv Detail & Related papers (2021-08-03T15:30:04Z)
- Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks, trained to convert periocular images between visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
- Multi-Margin based Decorrelation Learning for Heterogeneous Face Recognition [90.26023388850771]
This paper presents a deep neural network approach to extract decorrelation representations in a hyperspherical space for cross-domain face images.
The proposed framework can be divided into two components: heterogeneous representation network and decorrelation representation learning.
Experimental results on two challenging heterogeneous face databases show that our approach achieves superior performance on both verification and recognition tasks.
arXiv Detail & Related papers (2020-05-25T07:01:12Z)
- Spectrum Translation for Cross-Spectral Ocular Matching [59.17685450892182]
Cross-spectral verification remains a significant challenge in biometrics, especially for the ocular region.
We investigate the use of Conditional Adversarial Networks for spectrum translation between near-infrared and visible-light images for ocular biometrics (see the illustrative sketch after this list).
arXiv Detail & Related papers (2020-02-14T19:30:31Z)
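Several of the entries above (the conditional-adversarial periocular and ocular-matching papers) translate images from one spectrum to the other before matching. The sketch below shows the basic shape of such a pix2pix-style setup under illustrative assumptions (paired NIR/VIS training images, a tiny encoder-decoder generator, a discriminator conditioned on the NIR input, and an adversarial-plus-L1 generator loss); none of the architectures or loss weights are taken from those papers.

```python
# Minimal pix2pix-style sketch of NIR -> VIS spectrum translation, illustrating
# the conditional-GAN setup used by several of the papers listed above.
# Architectures and loss weights are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Tiny encoder-decoder mapping a 1-channel NIR image to a 3-channel VIS image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, nir):
        return self.net(nir)


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the NIR input image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, nir, vis):
        return self.net(torch.cat([nir, vis], dim=1))


def generator_loss(disc, nir, fake_vis, real_vis, l1_weight: float = 100.0):
    """Adversarial term (fool the critic) plus an L1 reconstruction term."""
    logits = disc(nir, fake_vis)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + l1_weight * F.l1_loss(fake_vis, real_vis)


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    nir = torch.randn(4, 1, 64, 64)   # dummy paired NIR images
    vis = torch.randn(4, 3, 64, 64)   # dummy paired VIS images
    fake = G(nir)
    print(fake.shape, generator_loss(D, nir, fake, vis).item())
```

Once trained, the translated images would be handed to an ordinary same-spectrum matcher, which is the overall strategy those works report.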