Cross-Spectral Periocular Recognition with Conditional Adversarial
Networks
- URL: http://arxiv.org/abs/2008.11604v1
- Date: Wed, 26 Aug 2020 15:02:04 GMT
- Title: Cross-Spectral Periocular Recognition with Conditional Adversarial
Networks
- Authors: Kevin Hernandez-Diaz, Fernando Alonso-Fernandez, Josef Bigun
- Abstract summary: We propose Conditional Generative Adversarial Networks, trained to convert periocular images between visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
- Score: 59.17685450892182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses the challenge of comparing periocular images captured in
different spectra, which is known to produce significant drops in performance
in comparison to operating in the same spectrum. We propose the use of
Conditional Generative Adversarial Networks, trained to convert periocular
images between visible and near-infrared spectra, so that biometric
verification is carried out in the same spectrum. The proposed setup allows the
use of existing feature methods typically optimized to operate in a single
spectrum. Recognition experiments are done using a number of off-the-shelf
periocular comparators based both on hand-crafted features and CNN descriptors.
Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database
(PolyU) as benchmark dataset, our experiments show that cross-spectral
performance is substantially improved if both images are converted to the same
spectrum, in comparison to matching features extracted from images in different
spectra. In addition to this, we fine-tune a CNN based on the ResNet50
architecture, obtaining a cross-spectral periocular performance of EER=1%, and
GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU
database.
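The pipeline can be illustrated with a short PyTorch sketch (not the authors' code): a conditional generator translates a visible-light probe into the NIR domain, and verification is then carried out in that single spectrum with a ResNet50 embedding and cosine scoring. The generator below is a toy stand-in for a trained pix2pix-style model, and all names, layer sizes and the decision threshold are illustrative assumptions (torchvision >= 0.13 is assumed for the weights enum).
```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Toy stand-in for a trained VIS->NIR generator; in the paper this role is played
# by a pix2pix-style conditional GAN trained with an adversarial + L1 objective.
G_vis2nir = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1), torch.nn.Tanh(),
)

# Feature extractor: ResNet50 with the classification head removed (2048-D embedding).
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def verify(vis_probe, nir_gallery, threshold=0.5):
    """vis_probe, nir_gallery: (1, 3, H, W) tensors; returns an accept/reject decision."""
    nir_probe = G_vis2nir(vis_probe)                        # translate probe into the NIR domain
    score = F.cosine_similarity(backbone(nir_probe), backbone(nir_gallery)).item()
    return score >= threshold

# Example with random tensors standing in for preprocessed periocular crops.
decision = verify(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```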
Related papers
- Spectrum Translation for Refinement of Image Generation (STIG) Based on
Contrastive Learning and Spectral Filter Profile [15.5188527312094]
We propose a framework to mitigate the disparity in the frequency domain of generated images.
This is realized by spectrum translation for the refinement of image generation (STIG) based on contrastive learning.
We evaluate our framework across eight fake image datasets and various cutting-edge models to demonstrate the effectiveness of STIG.
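As a rough illustration of the quantity such spectrum-refinement methods target (this is not STIG itself), the sketch below compares the log-amplitude spectra of real and generated image batches; the function names and tensor sizes are assumptions.
```python
import torch

def log_amplitude_spectrum(img: torch.Tensor) -> torch.Tensor:
    """img: (B, C, H, W). Returns the centred log-amplitude spectrum."""
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    return torch.log1p(spec.abs())

def spectral_gap(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between average spectra (lower = smaller frequency-domain disparity)."""
    return (log_amplitude_spectrum(real).mean(0) - log_amplitude_spectrum(fake).mean(0)).abs().mean()

gap = spectral_gap(torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64))
```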
arXiv Detail & Related papers (2024-03-08T06:39:24Z)
- Exploring Invariant Representation for Visible-Infrared Person Re-Identification [77.06940947765406]
Cross-spectral person re-identification, which aims to associate identities with pedestrians across different spectra, faces the main challenge of modality discrepancy.
In this paper, we address the problem at both the image level and the feature level in an end-to-end hybrid learning framework named the robust feature mining network (RFM).
Experiment results on two standard cross-spectral person re-identification datasets, RegDB and SYSU-MM01, have demonstrated state-of-the-art performance.
arXiv Detail & Related papers (2023-02-02T05:24:50Z)
- Compact multi-scale periocular recognition using SAFE features [63.48764893706088]
We present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor.
We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate around this unique point of the eye.
arXiv Detail & Related papers (2022-10-18T11:46:38Z)
- Synthesis-Guided Feature Learning for Cross-Spectral Periocular Recognition [1.52292571922932]
We propose a novel approach to cross-spectral periocular verification.
It primarily focuses on learning a mapping from visible and NIR periocular images to a shared latent representational subspace.
We show that the auxiliary image reconstruction task results in learning a more discriminative, domain-invariant subspace.
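A hedged sketch of this general idea, not the paper's actual model: two encoders map VIS and NIR crops into a shared embedding space, while an auxiliary decoder reconstructs a low-resolution NIR image from the VIS embedding. All modules, layer sizes and loss weights below are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder() -> nn.Module:
    # Small conv encoder producing a 128-D embedding (sizes are illustrative).
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
    )

enc_vis, enc_nir = make_encoder(), make_encoder()
decoder = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())  # toy reconstruction head

def training_loss(vis, nir):
    z_vis, z_nir = enc_vis(vis), enc_nir(nir)
    match_loss = 1 - F.cosine_similarity(z_vis, z_nir).mean()       # pull genuine cross-spectral pairs together
    recon = decoder(z_vis).view(-1, 3, 32, 32)                      # synthesis-guided auxiliary task
    recon_loss = F.l1_loss(recon, F.interpolate(nir, size=(32, 32)))
    return match_loss + recon_loss

loss = training_loss(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```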
arXiv Detail & Related papers (2021-11-16T19:22:20Z)
- Iris Recognition Based on SIFT Features [63.07521951102555]
We use the Scale Invariant Feature Transformation (SIFT) for recognition using iris images.
We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator.
We also show the complement between the SIFT approach and a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets.
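A minimal OpenCV sketch in the spirit of this approach (not the paper's implementation): detect SIFT keypoints in two iris images and count ratio-test matches. The ratio threshold and function names are illustrative; OpenCV >= 4.4 is assumed for cv2.SIFT_create.
```python
import cv2

def sift_match_score(path_a: str, path_b: str, ratio: float = 0.75) -> int:
    """Returns the number of ratio-test matches between two grayscale iris images."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)    # keypoints + 128-D descriptors
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe's ratio test
    return len(good)                                    # higher score = more likely the same iris
```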
arXiv Detail & Related papers (2021-10-30T04:55:33Z)
- There and Back Again: Self-supervised Multispectral Correspondence Estimation [13.56924750612194]
We introduce a novel cycle-consistency metric that allows us to self-supervise. This, combined with our spectra-agnostic loss functions, allows us to train the same network across multiple spectra.
We demonstrate our approach on the challenging task of dense RGB-FIR correspondence estimation.
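The cycle-consistency idea can be sketched as follows (an illustration, not the paper's exact formulation): map each pixel forward with the A->B flow, map it back with the B->A flow, and measure how far it lands from its starting point; small errors can serve as a self-supervised training signal.
```python
import torch

def cycle_error(flow_ab: torch.Tensor, flow_ba: torch.Tensor) -> torch.Tensor:
    """flow_ab, flow_ba: (B, 2, H, W) dense flows in pixels. Returns per-pixel cycle error (B, H, W)."""
    b, _, h, w = flow_ab.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys]).float().unsqueeze(0)           # (1, 2, H, W) base pixel coordinates
    fwd = grid + flow_ab                                        # where each pixel of A lands in B
    # Sample the backward flow at the forward-mapped positions (coordinates normalised to [-1, 1]).
    norm = torch.stack([fwd[:, 0] / (w - 1), fwd[:, 1] / (h - 1)], dim=-1) * 2 - 1
    back = torch.nn.functional.grid_sample(flow_ba, norm, align_corners=True)
    return (flow_ab + back).norm(dim=1)                         # ideally zero for consistent matches
```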
arXiv Detail & Related papers (2021-03-19T12:33:56Z)
- Cross-Spectral Iris Matching Using Conditional Coupled GAN [22.615156512223766]
Cross-spectral iris recognition is emerging as a promising biometric approach to authenticating the identity of individuals.
Matching iris images acquired at different spectral bands shows significant performance degradation when compared to single-band near-infrared (NIR) matching.
We propose a conditional coupled generative adversarial network (CpGAN) architecture for cross-spectral iris recognition.
arXiv Detail & Related papers (2020-10-09T19:13:24Z)
- A Similarity Inference Metric for RGB-Infrared Cross-Modality Person Re-identification [66.49212581685127]
Cross-modality person re-identification (re-ID) is a challenging task due to the large discrepancy between IR and RGB modalities.
Existing methods typically address this challenge by aligning feature distributions or image styles across modalities.
This paper presents a novel similarity inference metric (SIM) that exploits the intra-modality sample similarities to circumvent the cross-modality discrepancy.
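One simple way to exploit intra-modality structure, sketched below purely for illustration and not the paper's exact SIM formulation, is to propagate the raw RGB->IR similarity matrix through intra-modality affinity graphs, so that matches supported by similar neighbours within each modality score higher; the temperature and tensor shapes are assumptions.
```python
import torch
import torch.nn.functional as F

def refined_similarity(feat_rgb: torch.Tensor, feat_ir: torch.Tensor) -> torch.Tensor:
    """feat_rgb: (N, D), feat_ir: (M, D) L2-normalised embeddings. Returns (N, M) refined scores."""
    s_cross = feat_rgb @ feat_ir.t()                          # raw cross-modality similarity
    a_rgb = F.softmax(feat_rgb @ feat_rgb.t() / 0.1, dim=1)   # intra-modality affinities (RGB)
    a_ir = F.softmax(feat_ir @ feat_ir.t() / 0.1, dim=1)      # intra-modality affinities (IR)
    return a_rgb @ s_cross @ a_ir.t()                         # neighbour-supported similarity

scores = refined_similarity(F.normalize(torch.rand(5, 128), dim=1),
                            F.normalize(torch.rand(7, 128), dim=1))
```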
arXiv Detail & Related papers (2020-07-03T05:28:13Z)
- Spectrum Translation for Cross-Spectral Ocular Matching [59.17685450892182]
Cross-spectral verification remains a major challenge in biometrics, especially for the ocular area.
We investigate the use of Conditional Adversarial Networks for spectrum translation between near infra-red and visual light images for ocular biometrics.
arXiv Detail & Related papers (2020-02-14T19:30:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.