Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study
- URL: http://arxiv.org/abs/2005.07302v2
- Date: Wed, 9 Sep 2020 02:00:26 GMT
- Title: Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study
- Authors: Markos Georgopoulos, Yannis Panagakis, Maja Pantic
- Abstract summary: We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
- Score: 67.3961439193994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based methods have pushed the limits of the state-of-the-art in
face analysis. However, despite their success, these models have raised
concerns regarding their bias towards certain demographics. This bias stems
both from limited demographic diversity in the training set and from the
design of the algorithms themselves. In this work, we investigate the
demographic bias of deep learning models in face recognition, age estimation,
gender recognition and kinship verification. To this end, we introduce the most
comprehensive, large-scale dataset of facial images and videos to date. It
consists of 40K still images and 44K sequences (14.5M video frames in total)
captured in unconstrained, real-world conditions from 1,045 subjects. The data
are manually annotated in terms of identity, exact age, gender and kinship. The
performance of state-of-the-art models is scrutinized and demographic bias is
exposed by conducting a series of experiments. Lastly, a method to debias
network embeddings is introduced and tested on the proposed benchmarks.
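The abstract names a method to debias network embeddings but gives no detail here. As a point of reference only, a common family of approaches removes a learned demographic direction from the embeddings by linear projection; the following is a minimal, hypothetical Python sketch of that generic idea (the linear probe, the 512-d embeddings, and the helper names are assumptions, not the paper's method):

    # Hypothetical sketch: debiasing face embeddings by projecting out a
    # learned demographic direction. This is a generic technique shown for
    # illustration only, NOT the method proposed in the paper above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def demographic_direction(embeddings, labels):
        # Fit a linear probe for a binary demographic attribute (e.g.
        # gender) and return the unit normal of its decision boundary.
        probe = LogisticRegression(max_iter=1000).fit(embeddings, labels)
        w = probe.coef_.ravel()
        return w / np.linalg.norm(w)

    def debias(embeddings, direction):
        # Remove each embedding's component along the demographic direction.
        return embeddings - np.outer(embeddings @ direction, direction)

    # Usage with random stand-in data; real inputs would be embeddings from
    # a face network plus the dataset's demographic annotations.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 512))    # assumed 512-d face embeddings
    y = rng.integers(0, 2, size=1000)   # assumed binary demographic labels
    d = demographic_direction(X, y)
    X_debiased = debias(X, d)
    # After projection the embeddings are orthogonal to the probe direction.
    assert np.allclose(X_debiased @ d, 0.0, atol=1e-6)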
Related papers
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness (a minimal per-group robustness check is sketched after this list).
We conclude that commercial models are always at least as biased as the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild [50.8865921538953]
We propose a method to explicitly incorporate facial semantics into age estimation.
We design a face parsing-based network to learn semantic information at different scales.
We show that our method consistently outperforms all existing age estimation methods.
arXiv Detail & Related papers (2021-06-21T14:31:32Z)
- Towards measuring fairness in AI: the Casual Conversations dataset [9.246092246471955]
Our dataset is composed of 3,011 subjects and contains over 45,000 videos, with an average of 15 videos per person.
The videos were recorded in multiple U.S. states with a diverse set of adults in various age, gender and apparent skin tone groups.
arXiv Detail & Related papers (2021-04-06T22:48:22Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results (a per-subgroup threshold sketch follows the list below).
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition [26.49981022316179]
The aim of the challenge was to evaluate the accuracy and bias of the submitted algorithms with respect to gender and skin colour.
The dataset is not balanced, which simulates a real-world scenario in which AI-based models that are expected to produce fair outcomes are trained and evaluated on imbalanced data.
The analysis of the top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone; the sketch after this list shows how such per-group error rates can be computed.
arXiv Detail & Related papers (2020-09-16T17:56:22Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)
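Two of the papers above (the commercial-vs-academic comparison and the image-distortion study) evaluate how robustness to noise or distortion differs across demographic subgroups. The sketch referenced in the first entry is below: a minimal, hypothetical per-group robustness check, where the model, the images, and the group labels are placeholders rather than code from any of the listed papers.

    # Hypothetical sketch of a per-subgroup noise-robustness check, in the
    # spirit of the noise/distortion studies above. `model` (anything with
    # a predict method), the images scaled to [0, 1], and the group labels
    # are stand-in assumptions.
    import numpy as np

    def accuracy(model, images, labels):
        return float(np.mean(model.predict(images) == labels))

    def accuracy_under_noise(model, images, labels, sigma, seed=0):
        # Accuracy after adding Gaussian pixel noise with std `sigma`.
        rng = np.random.default_rng(seed)
        noisy = np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)
        return accuracy(model, noisy, labels)

    def robustness_gaps(model, images, labels, groups, sigma=0.1):
        # Per-group accuracy drop under noise; a large spread across groups
        # indicates demographically disparate robustness.
        gaps = {}
        for g in np.unique(groups):
            m = groups == g
            gaps[g] = (accuracy(model, images[m], labels[m])
                       - accuracy_under_noise(model, images[m], labels[m], sigma))
        return gaps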
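The Balanced Faces in the Wild and FairFace entries above both concern verification error rates across subgroups: per-group false positive and false negative rates at a single global threshold, and per-group thresholds as an alternative. The sketch below illustrates that generic computation; the scores, genuine/impostor labels, and target FPR value are stand-in assumptions, not data or code from those papers.

    # Hypothetical sketch: per-group verification error rates at one global
    # threshold, plus per-group thresholds that each target a fixed FPR.
    import numpy as np

    def error_rates(scores, is_genuine, threshold):
        # FPR / FNR of a verifier that accepts pairs scoring >= threshold.
        accept = scores >= threshold
        fpr = float(np.mean(accept[~is_genuine]))   # impostor pairs accepted
        fnr = float(np.mean(~accept[is_genuine]))   # genuine pairs rejected
        return fpr, fnr

    def per_group_report(scores, is_genuine, groups, global_thr, target_fpr=1e-3):
        # Contrast one global threshold with per-group thresholds chosen so
        # that each group meets `target_fpr` on its own impostor scores.
        report = {}
        for g in np.unique(groups):
            m = groups == g
            fpr, fnr = error_rates(scores[m], is_genuine[m], global_thr)
            thr_g = float(np.quantile(scores[m & ~is_genuine], 1.0 - target_fpr))
            report[g] = {"fpr@global": fpr, "fnr@global": fnr, "group_thr": thr_g}
        return report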
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.