Probing Fairness of Mobile Ocular Biometrics Methods Across Gender on
VISOB 2.0 Dataset
- URL: http://arxiv.org/abs/2011.08898v1
- Date: Tue, 17 Nov 2020 19:32:56 GMT
- Title: Probing Fairness of Mobile Ocular Biometrics Methods Across Gender on
VISOB 2.0 Dataset
- Authors: Anoop Krishnan, Ali Almadan, Ajita Rattani
- Abstract summary: This study aims to explore the fairness of ocular-based authentication and gender classification methods across males and females.
Experimental results suggest the equivalent performance of males and females for ocular-based mobile user-authentication.
Males significantly outperformed females in deep learning-based gender classification models operating on the ocular region.
- Score: 0.8594140167290097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has questioned the fairness of face-based recognition and
attribute classification methods (such as gender and race) for dark-skinned
people and women. Ocular biometrics in the visible spectrum is an alternate
solution over face biometrics, thanks to its accuracy, security, robustness
against facial expression, and ease of use in mobile devices. With the recent
COVID-19 crisis, ocular biometrics has a further advantage over face biometrics
in the presence of a mask. However, the fairness of ocular biometrics has not
been studied until now. This is the first study to explore the fairness of
ocular-based authentication and gender classification methods across males and
females. To this aim, the VISOB 2.0 dataset, along with its gender annotations,
is used for the fairness analysis of ocular biometrics methods based on
ResNet-50, MobileNet-V2, and lightCNN-29 models. Experimental results suggest
equivalent performance for males and females in ocular-based mobile
user-authentication in terms of genuine match rate (GMR) at lower false match
rates (FMRs) and overall Area Under the Curve (AUC). For instance, lightCNN-29
obtained an average AUC of 0.96 for females and 0.95 for males. However, males
significantly outperformed females in deep learning-based gender classification
models operating on the ocular region.
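The two evaluation metrics named in the abstract, AUC and GMR at a fixed FMR, can be sketched from raw match scores as below. This is an illustrative implementation, not the authors' code; the score lists are synthetic and chosen only for demonstration.

```python
# Illustrative sketch (not the authors' code) of the two metrics above:
# AUC over match scores, and genuine match rate (GMR) at a fixed false
# match rate (FMR). All score values below are synthetic.

def auc(genuine, impostor):
    """AUC as P(genuine score > impostor score); ties count as 0.5."""
    wins = sum(1.0 if g > i else 0.5 if g == i else 0.0
               for g in genuine for i in impostor)
    return wins / (len(genuine) * len(impostor))

def gmr_at_fmr(genuine, impostor, target_fmr):
    """GMR at the lowest threshold whose FMR does not exceed target_fmr."""
    n = len(impostor)
    for t in sorted(set(impostor)):
        if sum(i >= t for i in impostor) / n <= target_fmr:
            return sum(g >= t for g in genuine) / len(genuine)
    # Target stricter than 1/n: place the threshold above every impostor score.
    top = max(impostor)
    return sum(g > top for g in genuine) / len(genuine)

genuine = [0.9, 0.8, 0.55, 0.7, 0.95]      # same-subject comparison scores
impostor = [0.3, 0.6, 0.5, 0.4, 0.2]       # different-subject comparison scores
print(auc(genuine, impostor))              # 0.96
print(gmr_at_fmr(genuine, impostor, 0.2))  # 0.8
```

A fairness analysis along these lines would compute both quantities separately on the male and female score distributions and compare them, as the paper does per model.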
Related papers
- FaceSaliencyAug: Mitigating Geographic, Gender and Stereotypical Biases via Saliency-Based Data Augmentation [46.74201905814679]
We present an approach named FaceSaliencyAug aimed at addressing the gender bias in computer vision models.
We quantify dataset diversity using an Image Similarity Score (ISS) across six datasets: Flickr Faces HQ (FFHQ), WIKI, IMDB, Labelled Faces in the Wild (LFW), UTK Faces, and a Diverse dataset.
Our experiments reveal a reduction in gender bias for both CNNs and ViTs, indicating the efficacy of our method in promoting fairness and inclusivity in computer vision models.
arXiv Detail & Related papers (2024-10-17T22:36:52Z) - GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - Deep Ear Biometrics for Gender Classification [3.285531771049763]
We have developed a deep convolutional neural network (CNN) model for automatic gender classification using ear image samples.
The proposed model has achieved 93% accuracy on the EarVN1.0 ear dataset.
arXiv Detail & Related papers (2023-08-17T06:15:52Z) - Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z) - Facial Soft Biometrics for Recognition in the Wild: Recent Works,
Annotation, and COTS Evaluation [63.05890836038913]
We study the role of soft biometrics to enhance person recognition systems in unconstrained scenarios.
We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning.
arXiv Detail & Related papers (2022-10-24T11:29:57Z) - Investigating Fairness of Ocular Biometrics Among Young, Middle-Aged,
and Older Adults [0.0]
There is a recent urge to investigate the bias of different biometric modalities toward the deployment of fair and trustworthy biometric solutions.
This paper aims to evaluate the fairness of ocular biometrics in the visible spectrum among age groups: young, middle-aged, and older adults.
arXiv Detail & Related papers (2021-10-04T18:03:18Z) - Faces in the Wild: Efficient Gender Recognition in Surveillance
Conditions [0.0]
We present frontal and wild face versions of three well-known surveillance datasets.
We propose a model that effectively and dynamically combines facial and body information, which makes it suitable for gender recognition in wild conditions.
Our model combines facial and body information through a learnable fusion matrix and a channel-attention sub-network, focusing on the most influential body parts according to the specific image/subject features.
arXiv Detail & Related papers (2021-07-14T17:02:23Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - InsideBias: Measuring Bias in Deep Networks and Application to Face
Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.