Measuring Hidden Bias within Face Recognition via Racial Phenotypes
- URL: http://arxiv.org/abs/2110.09839v1
- Date: Tue, 19 Oct 2021 10:46:59 GMT
- Title: Measuring Hidden Bias within Face Recognition via Racial Phenotypes
- Authors: Seyma Yucer, Furkan Tektas, Noura Al Moubayed and Toby P. Breckon
- Abstract summary: This study introduces an alternative racial bias analysis methodology via facial phenotype attributes for face recognition.
We propose categorical test cases to investigate the individual influence of those attributes on bias within face recognition tasks.
- Score: 21.74534280021516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work reports disparate performance for intersectional racial groups
across face recognition tasks: face verification and identification. However,
the definition of those racial groups has a significant impact on the
underlying findings of such racial bias analysis. Previous studies define these
groups based on either demographic information (e.g. African, Asian etc.) or
skin tone (e.g. lighter or darker skins). The use of such sensitive or broad
group definitions has disadvantages for bias investigation and subsequent
counter-bias solutions design. By contrast, this study introduces an
alternative racial bias analysis methodology via facial phenotype attributes
for face recognition. We use the set of observable characteristics of an
individual face where a race-related facial phenotype is hence specific to the
human face and correlated to the racial profile of the subject. We propose
categorical test cases to investigate the individual influence of those
attributes on bias within face recognition tasks. We compare our
phenotype-based grouping methodology with previous grouping strategies and show
that phenotype-based groupings uncover hidden bias without reliance upon any
potentially protected attributes or ill-defined grouping strategies.
Furthermore, we contribute corresponding phenotype attribute category labels
for two face recognition tasks: RFW for face verification and VGGFace2 (test
set) for face identification.
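As a rough sketch of how such phenotype-grouped bias measurement might be scored (the category labels and pair results below are hypothetical toy data, not the authors' released annotations):

```python
from itertools import combinations

def per_group_accuracy(results):
    """Verification accuracy for each phenotype category.

    `results` maps a phenotype category (e.g. a hypothetical
    eye-shape label) to a list of (predicted_same, actual_same)
    booleans for face-verification pairs from that category.
    """
    return {
        group: sum(p == a for p, a in pairs) / len(pairs)
        for group, pairs in results.items()
    }

def max_disparity(accuracies):
    """Largest pairwise accuracy gap across phenotype categories."""
    return max(abs(a - b) for a, b in combinations(accuracies.values(), 2))

# Toy illustration with two hypothetical phenotype categories.
results = {
    "category_a": [(True, True), (False, True), (True, True), (True, True)],
    "category_b": [(True, True), (True, True), (True, True), (True, True)],
}
acc = per_group_accuracy(results)   # {'category_a': 0.75, 'category_b': 1.0}
gap = max_disparity(acc)            # 0.25
```

A gap near zero would suggest the model treats the phenotype categories comparably; a large gap flags hidden bias tied to that attribute.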
Related papers
- Racial Bias within Face Recognition: A Survey [15.924281804465252]
We discuss the problem definition of racial bias, starting with race definition, grouping strategies, and the societal implications of using race or race-related groupings.
We divide the common face recognition processing pipeline into four stages: image acquisition, face localisation, face representation, and face verification/identification.
The overall aim is to provide comprehensive coverage of the racial bias problem with respect to each and every stage of the face recognition processing pipeline.
arXiv Detail & Related papers (2023-05-01T13:33:12Z) - The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
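A minimal sketch of the kind of per-group analysis described above, computing verification accuracy per demographic group at a fixed decision threshold (the scores, labels, and group assignments are invented stand-ins for a real evaluation set):

```python
import numpy as np

def group_metrics(scores, labels, groups, threshold=0.5):
    """Per-group verification accuracy at a fixed decision threshold.

    scores: pair similarity scores; labels: 1 if the pair shares an
    identity, else 0; groups: demographic group of each pair.
    """
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    preds = scores >= threshold          # accept pair if similarity is high enough
    return {g: float((preds[groups == g] == labels[groups == g]).mean())
            for g in np.unique(groups)}

# Hypothetical evaluation pairs for two groups "A" and "B".
metrics = group_metrics(
    scores=[0.9, 0.4, 0.8, 0.3, 0.6, 0.2],
    labels=[1, 0, 1, 0, 1, 1],
    groups=["A", "A", "B", "B", "A", "B"],
)
```

Comparing these per-group values at a single shared threshold is one simple way a performance gap across subgroups becomes visible.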
arXiv Detail & Related papers (2022-11-26T07:03:24Z) - Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z) - Explaining Bias in Deep Face Recognition via Image Characteristics [9.569575076277523]
We evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets.
We then analyze the impact of image characteristics on model performance.
arXiv Detail & Related papers (2022-08-23T17:18:23Z) - Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are correlated with the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z) - Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z) - Asymmetric Rejection Loss for Fairer Face Recognition [1.52292571922932]
Research has shown differences in face recognition performance across different ethnic groups due to the racial imbalance in the training datasets.
This is symptomatic of the under-representation of non-Caucasian ethnic groups in the celebrity population from which face datasets are usually gathered.
We propose an Asymmetric Rejection Loss, which aims at making full use of unlabeled images of those under-represented groups.
arXiv Detail & Related papers (2020-02-09T04:01:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.