Face Recognition: Too Bias, or Not Too Bias?
- URL: http://arxiv.org/abs/2002.06483v4
- Date: Tue, 21 Apr 2020 01:34:11 GMT
- Title: Face Recognition: Too Bias, or Not Too Bias?
- Authors: Joseph P Robinson, Gennady Livitz, Yann Henon, Can Qin, Yun Fu, and
Samson Timoner
- Abstract summary: We reveal critical insights into problems of bias in state-of-the-art facial recognition systems.
We show variations in the optimal scoring threshold for face-pairs across different subgroups.
We also conduct a human evaluation to measure bias in humans, which supports the hypothesis that such bias exists in human perception.
- Score: 45.404162391012726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We reveal critical insights into problems of bias in state-of-the-art facial
recognition (FR) systems using a novel Balanced Faces In the Wild (BFW)
dataset: data balanced for gender and ethnic groups. We show variations in the
optimal scoring threshold for face-pairs across different subgroups. Thus, the
conventional approach of learning a global threshold for all pairs results in
performance gaps among subgroups. By learning subgroup-specific thresholds, we
not only mitigate problems in performance gaps but also show a notable boost in
the overall performance. Furthermore, we do a human evaluation to measure the
bias in humans, which supports the hypothesis that such a bias exists in human
perception. For the BFW database, source code, and more, visit
github.com/visionjo/facerec-bias-bfw.
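The core idea of the abstract, that a single global decision threshold is suboptimal when subgroups have different score distributions, can be illustrated with a minimal sketch. This is not the paper's code: the `best_threshold` helper, the synthetic cosine-similarity scores, and the two hypothetical subgroups are all illustrative assumptions.

```python
import numpy as np

def best_threshold(scores, labels, grid=None):
    """Pick the score threshold that maximizes verification accuracy
    (fraction of pairs correctly classified as genuine/impostor)."""
    if grid is None:
        grid = np.linspace(scores.min(), scores.max(), 201)
    accs = [np.mean((scores >= t) == labels) for t in grid]
    return grid[int(np.argmax(accs))]

# Synthetic similarity scores for two hypothetical subgroups whose
# genuine/impostor distributions sit at different operating points.
rng = np.random.default_rng(0)
subgroups = {
    "A": (rng.normal(0.70, 0.05, 500), rng.normal(0.40, 0.05, 500)),
    "B": (rng.normal(0.60, 0.05, 500), rng.normal(0.30, 0.05, 500)),
}
scores = {g: np.concatenate([gen, imp]) for g, (gen, imp) in subgroups.items()}
labels = {g: np.concatenate([np.ones(len(gen), bool), np.zeros(len(imp), bool)])
          for g, (gen, imp) in subgroups.items()}

# Conventional approach: one global threshold for all pairs.
all_scores = np.concatenate(list(scores.values()))
all_labels = np.concatenate(list(labels.values()))
t_global = best_threshold(all_scores, all_labels)

# Subgroup-specific thresholds close (or shrink) the per-group accuracy gap.
for g in subgroups:
    t_g = best_threshold(scores[g], labels[g])
    acc_global = np.mean((scores[g] >= t_global) == labels[g])
    acc_local = np.mean((scores[g] >= t_g) == labels[g])
    print(f"subgroup {g}: global t={t_global:.2f} acc={acc_global:.3f}, "
          f"per-subgroup t={t_g:.2f} acc={acc_local:.3f}")
```

In this toy setup, tuning the threshold per subgroup can only match or improve each group's accuracy relative to the shared threshold, which mirrors the paper's reported effect of subgroup-specific thresholds on performance gaps.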
Related papers
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Mitigating Algorithmic Bias on Facial Expression Recognition [0.0]
Biased datasets are ubiquitous and present a challenge for machine learning.
The problem of biased datasets is especially sensitive when dealing with minority groups.
This work explores one way to mitigate bias using a debiasing variational autoencoder with experiments on facial expression recognition.
arXiv Detail & Related papers (2023-12-23T17:41:30Z)
- Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns [53.62845317039185]
Bias-measuring datasets play a critical role in detecting biased behavior of language models.
We propose a novel method to collect diverse, natural, and minimally distant text pairs via counterfactual generation.
We show that four pre-trained language models are significantly more inconsistent across different gender groups than within each group.
arXiv Detail & Related papers (2023-02-11T12:11:03Z)
- Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model [7.049738935364298]
In this work, we investigate the gender bias of deep Face Recognition networks.
Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology.
In fact, extensive numerical experiments on a variety of datasets show that a careful selection significantly reduces gender bias.
arXiv Detail & Related papers (2022-10-24T23:53:56Z)
- Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, overall apparent-gender representation is usually roughly balanced, but the representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy in the recognition of certain emotions between genders of up to 29% under the worst bias conditions.
arXiv Detail & Related papers (2022-10-11T10:52:23Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Exploring Biases and Prejudice of Facial Synthesis via Semantic Latent Space [1.858151490268935]
This work targets biased generative models' behaviors, identifying the cause of the biases and eliminating them.
We can (as expected) conclude that biased data causes biased predictions of face frontalization models.
We found that the seemingly obvious choice of 50:50 proportions was not the best for this dataset to reduce biased behavior on female faces.
arXiv Detail & Related papers (2021-08-23T16:09:18Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.