Towards Fair Face Verification: An In-depth Analysis of Demographic
Biases
- URL: http://arxiv.org/abs/2307.10011v1
- Date: Wed, 19 Jul 2023 14:49:14 GMT
- Title: Towards Fair Face Verification: An In-depth Analysis of Demographic
Biases
- Authors: Ioannis Sarridis, Christos Koutlis, Symeon Papadopoulos, Christos Diou
- Abstract summary: Deep learning-based person identification and verification systems have remarkably improved in terms of accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning-based person identification and verification systems have
remarkably improved in terms of accuracy in recent years; however, such
systems, including widely popular cloud-based solutions, have been found to
exhibit significant biases related to race, age, and gender, a problem that
requires in-depth exploration and solutions. This paper presents an in-depth
analysis, with a particular emphasis on the intersectionality of these
demographic factors. Intersectional bias refers to the performance
discrepancies w.r.t. the different combinations of race, age, and gender
groups, an area relatively unexplored in current literature. Furthermore, the
reliance of most state-of-the-art approaches on accuracy as the principal
evaluation metric often masks significant demographic disparities in
performance. To counter this crucial limitation, we incorporate five additional
metrics in our quantitative analysis, including disparate impact and
mistreatment metrics, which are typically ignored by the relevant
fairness-aware approaches. Results on the Racial Faces in-the-Wild (RFW)
benchmark indicate pervasive biases in face recognition systems, extending
beyond race, with different demographic factors yielding significantly
disparate outcomes. In particular, Africans demonstrate an 11.25% lower True
Positive Rate (TPR) compared to Caucasians, while only a 3.51% accuracy drop is
observed. Even more concerning, the intersections of multiple protected groups,
such as African females over 60 years old, demonstrate a +39.89% disparate
mistreatment rate compared to the highest rate among Caucasians. By shedding light on
these biases and their implications, this paper aims to stimulate further
research towards developing fairer, more equitable face recognition and
verification systems.
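To make the reported metrics concrete, here is a minimal sketch (not the authors' code) of a group-wise audit over verification pairs. The DataFrame `pairs` and all column names (`race`, `gender`, `age_band`, `label`, `decision`) are hypothetical placeholders for RFW-style annotations.
```python
# Minimal sketch (not the authors' code) of a group-wise fairness audit.
# `pairs` and all column names are hypothetical placeholders: each row is
# one verification pair with a ground-truth label (1 = same identity),
# the model's decision, and the probe's demographic annotations.
import pandas as pd

def audit(pairs: pd.DataFrame) -> pd.DataFrame:
    # Build intersectional groups, e.g. "African/female/60+".
    pairs = pairs.assign(
        group=pairs["race"] + "/" + pairs["gender"] + "/" + pairs["age_band"]
    )
    rows = []
    for group, df in pairs.groupby("group"):
        genuine = df[df["label"] == 1]    # same-identity pairs
        impostor = df[df["label"] == 0]   # different-identity pairs
        tpr = (genuine["decision"] == 1).mean()        # true positive rate
        rows.append({
            "group": group,
            "accuracy": (df["decision"] == df["label"]).mean(),
            "tpr": tpr,
            "fnr": 1.0 - tpr,                          # false negative rate
            "fpr": (impostor["decision"] == 1).mean(), # false positive rate
            "match_rate": (df["decision"] == 1).mean(),
        })
    report = pd.DataFrame(rows).set_index("group")
    # Disparate impact: each group's match rate relative to the most
    # favoured group (1.0 = parity); disparate mistreatment shows up as
    # large spreads in the FPR/FNR columns.
    report["disparate_impact"] = report["match_rate"] / report["match_rate"].max()
    return report
```
In such a report, accuracy can look nearly flat while the `tpr` and `fpr` columns diverge sharply across intersections, which is exactly the masking effect the abstract describes.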
Related papers
- Improving Bias in Facial Attribute Classification: A Combined Impact of KL Divergence induced Loss Function and Dual Attention
Earlier systems often exhibited demographic bias, particularly in gender and racial classification, with lower accuracy for women and individuals with darker skin tones.
This paper presents a method using a dual attention mechanism with a pre-trained Inception-ResNet V1 model, enhanced by KL-divergence regularization and a cross-entropy loss function.
The experimental results show significant improvements in both fairness and classification accuracy, demonstrating promising advances in addressing bias and enhancing the reliability of facial recognition systems (a hedged sketch of such a KL-regularized objective follows this list).
arXiv Detail & Related papers (2024-10-15T01:29:09Z)
- Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
arXiv Detail & Related papers (2022-08-17T16:23:35Z)
- A Deep Dive into Dataset Imbalance and Bias in Face Identification
Media portrayals often center imbalance as the main source of bias in automated face recognition systems.
Previous studies of data imbalance in FR have focused exclusively on the face verification setting.
This work thoroughly explores the effects of each kind of imbalance possible in face identification and discusses other factors that may impact bias in this setting.
arXiv Detail & Related papers (2022-03-15T20:23:13Z)
- Anatomizing Bias in Facial Analysis
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Information-Theoretic Bias Assessment Of Learned Representations Of Pretrained Face Recognition
We propose an information-theoretic, independent bias assessment metric to identify the degree of bias against protected demographic attributes.
Our metric differs from methods that rely on classification accuracy, or that compare ground-truth protected attributes against labels predicted by a shallow network (a crude stand-in for such a probe is sketched after this list).
arXiv Detail & Related papers (2021-11-08T17:41:17Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem (a standard quantification estimator is sketched after this list).
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- An Examination of Fairness of AI Models for Deepfake Detection
We evaluate bias present in deepfake datasets and detection models across protected subgroups.
Using facial datasets balanced by race and gender, we examine three popular deepfake detectors and find large disparities in predictive performances across races.
arXiv Detail & Related papers (2021-05-02T21:55:04Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results (see the per-group threshold sketch after this list).
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups
The aim of this paper is to investigate the differential performance of the gender classification algorithms across gender-race groups.
Across all the algorithms evaluated, Black females (and the Black race in general) consistently obtained the lowest accuracy rates.
Middle Eastern males and Latino females obtained higher accuracy rates most of the time.
arXiv Detail & Related papers (2020-09-24T04:56:10Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
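For the "Improving Bias in Facial Attribute Classification" entry above, the summary names a cross-entropy loss with KL-divergence regularization but not its exact form; the following is only one plausible instantiation, pulling each demographic group's mean predicted distribution toward the batch-wide mean. All names are hypothetical, and the dual-attention Inception-ResNet V1 backbone is omitted.
```python
# Hedged sketch of a KL-regularized classification objective; the
# paper's exact formulation is not given in the summary above.
import torch
import torch.nn.functional as F

def kl_regularized_loss(logits, targets, groups, lam=0.1):
    """logits: (N, C); targets: (N,) class ids; groups: (N,) group ids."""
    ce = F.cross_entropy(logits, targets)            # standard classification term
    probs = logits.softmax(dim=1)
    overall = probs.mean(dim=0)                      # batch-wide mean prediction
    kl = logits.new_zeros(())
    unique_groups = groups.unique()
    for g in unique_groups:
        group_mean = probs[groups == g].mean(dim=0)  # group's mean prediction
        # KL(group || overall), with epsilon for numerical stability
        kl = kl + (group_mean * ((group_mean + 1e-8) / (overall + 1e-8)).log()).sum()
    return ce + lam * kl / unique_groups.numel()
```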
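For the "Information-Theoretic Bias Assessment" entry above, the summary does not specify the estimator, so this is a deliberately crude stand-in: it measures mutual information between a protected attribute and a discretized 1-D projection of face embeddings, merely to illustrate the kind of quantity such a metric targets. All names are hypothetical.
```python
# Crude stand-in (not the paper's metric) for an information-theoretic
# bias probe on learned face representations.
import numpy as np
from sklearn.metrics import mutual_info_score

def embedding_attribute_mi(embeddings, attribute, n_bins=20):
    """embeddings: (N, D) array; attribute: (N,) discrete labels."""
    # Project onto the first principal direction as a simple 1-D summary.
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projection = centered @ vt[0]
    # Discretize and measure MI in nats; values near 0 suggest little
    # leakage of the protected attribute along this direction.
    bins = np.digitize(projection, np.histogram_bin_edges(projection, bins=n_bins))
    return mutual_info_score(attribute, bins)
```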
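For the "Measuring Fairness Under Unawareness of Sensitive Attributes" entry above: quantification estimates group prevalences from an imperfect attribute classifier. A standard estimator is adjusted classify-and-count (Forman, 2005), which corrects the observed positive rate by the classifier's known TPR and FPR; whether the paper uses exactly this estimator is not stated in the summary.
```python
# Adjusted classify-and-count: estimate the true prevalence of a
# sensitive attribute from a noisy attribute classifier.
def adjusted_classify_and_count(pred_positive_rate, tpr, fpr):
    """All inputs are rates in [0, 1]; tpr must exceed fpr."""
    est = (pred_positive_rate - fpr) / (tpr - fpr)
    return min(max(est, 0.0), 1.0)  # clip to a valid prevalence

# e.g. a classifier flags 30% of faces as group A, with TPR=0.85, FPR=0.10:
# adjusted_classify_and_count(0.30, 0.85, 0.10)  # ~0.267
```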
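For the "Balancing Biases and Preserving Privacy on Balanced Faces in the Wild" entry above, this sketch shows why a single global threshold is suboptimal: calibrating a threshold per group to a common target FPR exposes how much a shared threshold over- or under-shoots for each group. The arrays and the target FPR are hypothetical.
```python
# Per-group verification thresholds calibrated to a common target FPR.
import numpy as np

def per_group_thresholds(scores, labels, groups, target_fpr=1e-3):
    """scores: (N,) similarities; labels: (N,) 1=genuine, 0=impostor."""
    out = {}
    for g in np.unique(groups):
        imp = scores[(groups == g) & (labels == 0)]
        # Smallest threshold at which at most target_fpr of this
        # group's impostor pairs are accepted.
        out[g] = np.quantile(imp, 1.0 - target_fpr)
    return out

# A single global threshold calibrated on the pooled impostor scores
# overshoots the target FPR for some groups and undershoots for others:
# glob = np.quantile(scores[labels == 0], 1.0 - target_fpr)
# {g: (scores[(groups == g) & (labels == 0)] >= glob).mean()
#  for g in np.unique(groups)}
```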