Quantifying the Extent to Which Race and Gender Features Determine
Identity in Commercial Face Recognition Algorithms
- URL: http://arxiv.org/abs/2010.07979v1
- Date: Thu, 15 Oct 2020 18:52:36 GMT
- Title: Quantifying the Extent to Which Race and Gender Features Determine
Identity in Commercial Face Recognition Algorithms
- Authors: John J. Howard, Yevgeniy B. Sirotin, Jerry L. Tipton, and Arun R.
Vemury
- Abstract summary: The extent to which black-box commercial face recognition algorithms (CFRAs) use gender and race features to determine identity is poorly understood.
This study quantified the degree to which gender and race features influenced face recognition similarity scores between different people, i.e. non-mated scores.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human face features can be used to determine individual identity as well as
demographic information like gender and race. However, the extent to which
black-box commercial face recognition algorithms (CFRAs) use gender and race
features to determine identity is poorly understood despite increasing
deployments by government and industry. In this study, we quantified the degree
to which gender and race features influenced face recognition similarity scores
between different people, i.e. non-mated scores. We ran this study using five
different CFRAs and a sample of 333 diverse test subjects. As a control, we
compared the behavior of these non-mated distributions to a commercial iris
recognition algorithm (CIRA). Confirming prior work, all CFRAs produced higher
similarity scores for people of the same gender and race, an effect known as
"broad homogeneity". No such effect was observed for the CIRA. Next, we applied
principal components analysis (PCA) to similarity score matrices. We show that
some principal components (PCs) of CFRAs cluster people by gender and race, but
the majority do not. Demographic clustering in the PCs accounted for only 10%
of the total CFRA score variance. No clustering was observed for the CIRA. This
demonstrates that, although CFRAs use some gender and race features to
establish identity, most features utilized by current CFRAs are unrelated to
gender and race, similar to the iris texture patterns utilized by the CIRA.
Finally, reconstruction of similarity score matrices using only PCs that showed
no demographic clustering reduced broad homogeneity effects, but also decreased
the separation between mated and non-mated scores. This suggests it's possible
for CFRAs to operate on features unrelated to gender and race, albeit with
somewhat lower recognition accuracy, but that this is not the current
commercial practice.
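The "broad homogeneity" effect described in the abstract can be probed directly from a non-mated similarity score matrix and per-subject demographic labels. Below is a minimal sketch in Python, not the authors' code: the symmetric toy score matrix, the combined gender-race group labels, and the function name `broad_homogeneity_gap` are illustrative assumptions.

```python
import numpy as np

def broad_homogeneity_gap(scores: np.ndarray, groups: np.ndarray) -> float:
    """Mean non-mated score for same-demographic pairs minus different-demographic pairs."""
    n = scores.shape[0]
    iu = np.triu_indices(n, k=1)            # unique unordered pairs, excluding self-comparisons
    same = groups[iu[0]] == groups[iu[1]]   # True where both subjects share a gender-race group
    return scores[iu][same].mean() - scores[iu][~same].mean()

rng = np.random.default_rng(0)
n = 333                                     # sample size used in the study
groups = rng.integers(0, 4, size=n)         # hypothetical combined gender-race group labels
scores = rng.normal(size=(n, n))
scores = (scores + scores.T) / 2            # symmetric toy non-mated score matrix
print(f"same-group minus cross-group mean score: {broad_homogeneity_gap(scores, groups):+.4f}")
```

A clearly positive gap for a CFRA and a gap near zero for the CIRA would correspond to the pattern the abstract reports.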
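The PCA step can be sketched in the same spirit: apply PCA to the similarity score matrix, flag principal components along which the subjects' projections separate by demographic group, and sum the explained variance of the flagged components. The paper does not state its criterion for "demographic clustering" in a PC, so the per-component silhouette score and threshold below are stand-in assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def demographic_variance_fraction(scores, groups, threshold=0.2):
    """Fraction of total score variance in PCs whose subject projections cluster by group."""
    pca = PCA()
    proj = pca.fit_transform(scores)        # rows: subjects, columns: projections onto each PC
    flagged = [k for k in range(proj.shape[1])
               if silhouette_score(proj[:, [k]], groups) > threshold]  # assumed clustering criterion
    return pca.explained_variance_ratio_[flagged].sum(), flagged
```

Applied to the random toy `scores` and `groups` from the previous sketch, the fraction should come out near zero; for the real CFRA score matrices, the abstract puts the corresponding figure at roughly 10%.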
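Finally, the reconstruction experiment can be sketched by zeroing the flagged components before inverting the PCA and then comparing mated against non-mated scores with a separation statistic. The d-prime measure below is an assumption; the paper's abstract does not specify how the loss of mated/non-mated separation was quantified.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruct_without(scores: np.ndarray, drop) -> np.ndarray:
    """Rebuild the score matrix with the listed principal components zeroed out."""
    pca = PCA()
    proj = pca.fit_transform(scores)
    proj[:, drop] = 0.0                      # discard the demographic-clustered components
    return pca.inverse_transform(proj)

def d_prime(mated: np.ndarray, nonmated: np.ndarray) -> float:
    """Separation between mated and non-mated score distributions (an assumed metric)."""
    return float((mated.mean() - nonmated.mean()) /
                 np.sqrt(0.5 * (mated.var() + nonmated.var())))
```

Per the abstract, rebuilding scores from only the non-flagged components should shrink the broad homogeneity gap while also reducing this mated/non-mated separation.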
Related papers
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- Towards Fair Face Verification: An In-depth Analysis of Demographic Biases [11.191375513738361]
Deep learning-based person identification and verification systems have improved remarkably in accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
arXiv Detail & Related papers (2023-07-19T14:49:14Z)
- Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups [0.8594140167290097]
We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
arXiv Detail & Related papers (2022-08-17T16:23:35Z)
- Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z)
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Feature Representation in Deep Metric Embeddings [0.0]
This study takes embeddings trained to discriminate faces (identities) and uses unsupervised clustering to identify the features involved in facial identity discrimination.
In the intra-class scenario, the inference process distinguishes common attributes between single identities, achieving 90.0% and 76.0% accuracy for beards and glasses, respectively.
The system can also perform extra-class sub-discrimination with a high accuracy rate, notably 99.3%, 99.3% and 94.1% for gender, skin tone, and age, respectively.
arXiv Detail & Related papers (2021-02-05T13:53:10Z)
- One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z)
- Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups [0.8594140167290097]
The aim of this paper is to investigate the differential performance of the gender classification algorithms across gender-race groups.
For all the algorithms used, Black females (and the Black race in general) always obtained the lowest accuracy rates.
Middle Eastern males and Latino females obtained higher accuracy rates most of the time.
arXiv Detail & Related papers (2020-09-24T04:56:10Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored to Information Retrieval that is capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)