Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher
Mixture Model
- URL: http://arxiv.org/abs/2210.13664v3
- Date: Thu, 22 Feb 2024 17:01:36 GMT
- Title: Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher
Mixture Model
- Authors: Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stéphan Clémençon
- Abstract summary: In this work, we investigate the gender bias of deep Face Recognition networks.
Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology.
Extensive numerical experiments on a variety of datasets show that a careful selection of the loss hyperparameters significantly reduces gender bias.
- Score: 7.049738935364298
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite the high performance and reliability of deep learning algorithms
in a wide range of everyday applications, many investigations show that numerous
models exhibit biases, discriminating against specific subgroups of the population
(e.g. gender, ethnicity). This urges practitioners to develop fair systems with
uniform or comparable performance across sensitive groups. In
this work, we investigate the gender bias of deep Face Recognition networks. In
order to measure this bias, we introduce two new metrics, $\mathrm{BFAR}$ and
$\mathrm{BFRR}$, that better reflect the inherent deployment needs of Face
Recognition systems. Motivated by geometric considerations, we mitigate gender
bias through a new post-processing methodology which transforms the deep
embeddings of a pre-trained model to give more representation power to
discriminated subgroups. It consists in training a shallow neural network by
minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the
intra-class variance of each gender. Interestingly, we empirically observe that
these hyperparameters are correlated with our fairness metrics. In fact,
extensive numerical experiments on a variety of datasets show that a careful
selection significantly reduces gender bias. The code used for the experiments
can be found at https://github.com/JRConti/EthicalModule_vMF.
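To make the post-processing idea more concrete, the sketch below is a rough PyTorch illustration, not the authors' code (see the repository above for that): it trains a shallow module on top of frozen pre-trained embeddings with a von Mises-Fisher softmax whose concentration hyperparameter depends on the gender of each identity. The names EthicalModule, kappa_by_gender and class_gender, as well as the choice of a single linear layer, are placeholders assumed for illustration.

```python
# Minimal illustrative sketch (not the authors' implementation; see the
# repository linked above). Module and argument names (EthicalModule,
# kappa_by_gender, class_gender) are assumptions made for this example.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.special import ive  # exponentially scaled modified Bessel I_nu


def log_vmf_const(kappa: float, dim: int) -> float:
    """log C_d(kappa), the normalizing constant of a vMF density in R^dim."""
    nu = dim / 2.0 - 1.0
    log_bessel = math.log(ive(nu, kappa)) + kappa  # stable log I_nu(kappa)
    return nu * math.log(kappa) - (dim / 2.0) * math.log(2.0 * math.pi) - log_bessel


class EthicalModule(nn.Module):
    """Shallow post-processing network on top of frozen pre-trained embeddings."""

    def __init__(self, emb_dim: int, n_classes: int, class_gender: torch.Tensor,
                 kappa_by_gender=(20.0, 30.0)):
        super().__init__()
        self.transform = nn.Linear(emb_dim, emb_dim)  # the shallow trainable map
        self.centroids = nn.Parameter(torch.randn(n_classes, emb_dim))
        # Fixed concentration hyperparameter per identity, chosen by its gender.
        kappas = torch.tensor([kappa_by_gender[int(g)] for g in class_gender])
        self.register_buffer("kappa", kappas)
        self.register_buffer(
            "log_c", torch.tensor([log_vmf_const(float(k), emb_dim) for k in kappas])
        )

    def forward(self, pretrained_emb: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.transform(pretrained_emb), dim=-1)  # unit hypersphere
        mu = F.normalize(self.centroids, dim=-1)
        # log of the unnormalized vMF mixture terms: log C_d(k_c) + k_c * <mu_c, z>
        return self.log_c + self.kappa * (z @ mu.t())


def fair_vmf_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the vMF mixture (softmax over the logits)."""
    return F.cross_entropy(logits, labels)
```

In this sketch, the per-gender concentrations stand in for the hyperparameters discussed in the abstract: they control how much intra-class spread is tolerated for each gender, and the cross-entropy over these logits is the negative log-likelihood of the von Mises-Fisher mixture.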
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks [7.763173131630868]
We propose two metrics to quantitatively evaluate the class-wise bias of two models in comparison to one another.
By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias.
arXiv Detail & Related papers (2021-10-08T22:35:34Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
- Do Neural Ranking Models Intensify Gender Bias? [13.37092521347171]
We first provide a bias measurement framework which includes two metrics to quantify the degree of the unbalanced presence of gender-related concepts in a given IR model's ranking list.
Applying these queries to the MS MARCO Passage retrieval collection, we then measure the gender bias of a BM25 model and several recent neural ranking models.
Results show that while all models are strongly biased toward males, the neural models, and in particular the ones based on contextualized embeddings, significantly intensify gender bias.
arXiv Detail & Related papers (2020-05-01T13:31:11Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)
- Face Recognition: Too Bias, or Not Too Bias? [45.404162391012726]
We reveal critical insights into problems of bias in state-of-the-art facial recognition systems.
We show variations in the optimal scoring threshold for face-pairs across different subgroups.
We also do a human evaluation to measure the bias in humans, which supports the hypothesis that such bias exists in human perception.
arXiv Detail & Related papers (2020-02-16T01:08:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.