UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in
Unconstrained Scenarios
- URL: http://arxiv.org/abs/2011.12427v1
- Date: Tue, 24 Nov 2020 22:20:37 GMT
- Title: UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in
Unconstrained Scenarios
- Authors: Luiz A. Zanlorensi and Rayson Laroca and Diego R. Lucio and Lucas R.
Santos and Alceu S. Britto Jr. and David Menotti
- Abstract summary: We present a new periocular dataset containing samples from 1,122 subjects, acquired in 3 sessions by 196 different mobile devices.
The images were captured under unconstrained environments with just a single instruction to the participants: to place their eyes on a region of interest.
- Score: 4.229481360022994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, ocular biometrics in unconstrained environments using images
obtained at visible wavelengths has gained researchers' attention,
especially for images captured by mobile devices. Periocular recognition has
been demonstrated to be an alternative when the iris trait is not available due
to occlusions or low image resolution. However, the periocular trait does not
offer the high uniqueness of the iris trait. Thus, the use of datasets
containing many subjects is essential to assess biometric systems' capacity to
extract discriminating information from the periocular region. Also, to address
the within-class variability caused by lighting and attributes in the
periocular region, it is of paramount importance to use datasets with images of
the same subject captured in distinct sessions. As the datasets available in
the literature do not exhibit all these factors, in this work we present a new
periocular dataset containing samples from 1,122 subjects, acquired in 3
sessions by 196 different mobile devices. The images were captured under
unconstrained environments with just a single instruction to the participants:
to place their eyes on a region of interest. We also performed an extensive
benchmark with several Convolutional Neural Network (CNN) architectures and
models that have been employed in state-of-the-art approaches based on
Multi-class Classification, Multitask Learning, Pairwise Filters Network, and
Siamese Network. The results achieved in the closed- and open-world protocols,
considering the identification and verification tasks, show that this area
still needs research and development.
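The benchmark mentions Siamese networks among the verification approaches. As a rough illustration of that family only (not the authors' released code), the sketch below pairs a CNN embedder with a contrastive loss over genuine/impostor periocular pairs; the ResNet-50 backbone, 256-dimensional embedding, and margin value are illustrative assumptions.

```python
# Minimal sketch of a Siamese-style periocular verification setup.
# Backbone, embedding size, and margin are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class PeriocularEmbedder(nn.Module):
    """Maps a periocular image to an L2-normalized embedding."""

    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights optional
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.backbone(x), p=2, dim=1)


def contrastive_loss(emb_a, emb_b, same_subject, margin: float = 1.0):
    """Pulls genuine pairs together and pushes impostor pairs apart."""
    dist = F.pairwise_distance(emb_a, emb_b)
    loss_genuine = same_subject * dist.pow(2)
    loss_impostor = (1 - same_subject) * F.relu(margin - dist).pow(2)
    return (loss_genuine + loss_impostor).mean()


if __name__ == "__main__":
    model = PeriocularEmbedder()
    # Two batches of periocular crops plus pair labels:
    # 1 = same subject (genuine pair), 0 = different subjects (impostor pair).
    imgs_a = torch.randn(4, 3, 224, 224)
    imgs_b = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
    loss = contrastive_loss(model(imgs_a), model(imgs_b), labels)
    print(f"contrastive loss: {loss.item():.4f}")
```

At test time, verification reduces to thresholding the distance between the two embeddings of a pair, which is how closed- and open-world protocols are typically evaluated.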
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z)
- Image complexity based fMRI-BOLD visual network categorization across visual datasets using topological descriptors and deep-hybrid learning [3.522950356329991]
The aim of this study is to examine how network topology differs in response to distinct visual stimuli from visual datasets.
To achieve this, 0- and 1-dimensional persistence diagrams are computed for each visual network representing COCO, ImageNet, and SUN.
The extracted K-means cluster features are fed to a novel deep-hybrid model that yields accuracy in the range of 90%-95% in classifying these visual networks.
arXiv Detail & Related papers (2023-11-03T14:05:57Z)
- Periocular biometrics: databases, algorithms and directions [69.35569554213679]
Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions.
This paper presents a review of the state of the art in periocular biometric research.
arXiv Detail & Related papers (2023-07-26T11:14:36Z)
- Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes [0.5076419064097732]
We present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images.
We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations.
arXiv Detail & Related papers (2023-01-21T16:25:04Z)
- Periocular Biometrics: A Modality for Unconstrained Scenarios [66.93179447621188]
Periocular biometrics includes the externally visible region of the face that surrounds the eye socket.
The COVID-19 pandemic has highlighted its importance, as the ocular region remained the only visible facial area even in controlled settings.
arXiv Detail & Related papers (2022-12-28T12:08:27Z)
- EllSeg-Gen, towards Domain Generalization for head-mounted eyetracking [19.913297057204357]
We show that convolutional networks excel at extracting gaze features despite the presence of such artifacts.
We compare the performance of a single model trained with multiple datasets against a pool of models trained on individual datasets.
Results indicate that models tested on datasets in which eye images exhibit higher appearance variability benefit from multiset training.
arXiv Detail & Related papers (2022-05-04T08:35:52Z)
- Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of cover images should be treated differently since they have divergent tolerabilities.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and the attention mechanism.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z)
- Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z)
- Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks, trained to convert periocular images between visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
- SIP-SegNet: A Deep Convolutional Encoder-Decoder Network for Joint Semantic Segmentation and Extraction of Sclera, Iris and Pupil based on Periocular Region Suppression [8.64118000141143]
Multimodal biometric recognition systems have the ability to deal with the limitations of unimodal biometric systems.
Such systems possess high distinctiveness, permanence, and performance, while technologies based on other biometric traits can be easily compromised.
This work presents a novel deep learning framework called SIP-SegNet, which performs the joint semantic segmentation of ocular traits.
arXiv Detail & Related papers (2020-02-15T15:20:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.