InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics
- URL: http://arxiv.org/abs/2004.06592v3
- Date: Wed, 22 Jul 2020 10:24:18 GMT
- Title: InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics
- Authors: Ignacio Serna, Alejandro Peña, Aythami Morales, and Julian Fierrez
- Abstract summary: This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
- Score: 73.85525896663371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work explores the biases in learning processes based on deep neural
network architectures. We analyze how bias affects deep learning processes
through a toy example using the MNIST database and a case study in gender
detection from face images. We employ two gender detection models based on
popular deep neural networks. We present a comprehensive analysis of how an
unbalanced training dataset affects the features learned by the models. We show
how bias impacts the activations of gender detection models based on face
images. Finally, we propose InsideBias, a novel method to detect biased models.
InsideBias is based on how the models represent the information instead of how
they perform, which is the normal practice in other existing methods for bias
detection. Our strategy with InsideBias makes it possible to detect biased
models with very few samples (only 15 images in our case study). Our
experiments include 72K face images from 24K identities and 3 ethnic groups.
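
The abstract states the idea behind InsideBias (probing internal activations rather than output performance) but not its exact formulation. The sketch below is a minimal illustration of that idea, assuming a trained PyTorch model and two small demographic groups of images; the choice of probed layer, the min/max activation ratio, and the 0.5 flagging threshold are illustrative assumptions, not the authors' published method.

```python
# Minimal sketch of an activation-based bias probe (illustrative only,
# not the exact InsideBias formulation). Assumptions: a PyTorch model,
# a late layer to probe, and two small demographic groups of images.
import torch
import torch.nn as nn


def mean_activation(model: nn.Module, layer: nn.Module, images: torch.Tensor) -> float:
    """Average absolute activation of `layer` over a small batch of images."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["act"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()
    return captured["act"].abs().mean().item()


def activation_ratio(model, layer, group_a: torch.Tensor, group_b: torch.Tensor) -> float:
    """Min/max ratio of mean activations between two demographic groups (in [0, 1])."""
    a = mean_activation(model, layer, group_a)
    b = mean_activation(model, layer, group_b)
    return min(a, b) / max(max(a, b), 1e-12)


if __name__ == "__main__":
    # Toy stand-ins: an untrained CNN and random "face" batches of 15 images each,
    # matching the very-few-samples setting mentioned in the abstract.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
    )
    probe_layer = model[1]  # ReLU after the conv block (hypothetical probe point)
    group_a = torch.rand(15, 3, 64, 64)
    group_b = torch.rand(15, 3, 64, 64)

    ratio = activation_ratio(model, probe_layer, group_a, group_b)
    # A ratio close to 1 means both groups activate the probed layer similarly;
    # a low ratio (e.g. < 0.5, hypothetical threshold) suggests a biased representation.
    print(f"activation ratio: {ratio:.3f}",
          "-> possible bias" if ratio < 0.5 else "-> balanced")
```

A usage note: because the probe only needs forward passes over a handful of images per group, this kind of check is cheap compared with re-evaluating error rates on a large, demographically labeled test set.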
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher
Mixture Model [7.049738935364298]
In this work, we investigate the gender bias of deep Face Recognition networks.
Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology.
Extensive numerical experiments on a variety of datasets show that careful selection significantly reduces gender bias.
arXiv Detail & Related papers (2022-10-24T23:53:56Z) - Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face
Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z) - Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias
in Image Search [8.730027941735804]
We study a unique gender bias in image search: search results are often gender-imbalanced for gender-neutral natural language queries.
We introduce two novel debiasing approaches.
arXiv Detail & Related papers (2021-09-12T04:47:33Z) - IFBiD: Inference-Free Bias Detection [13.492626767817017]
This paper is the first to explore an automatic way to detect bias in deep convolutional neural networks by simply looking at their weights.
We analyze how bias is encoded in the weights of deep networks through a toy example using the Colored MNIST database.
arXiv Detail & Related papers (2021-09-09T16:01:31Z) - Unravelling the Effect of Image Distortions for Biased Prediction of
Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
arXiv Detail & Related papers (2020-10-06T16:42:51Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and
Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)