Are Face Detection Models Biased?
- URL: http://arxiv.org/abs/2211.03588v1
- Date: Mon, 7 Nov 2022 14:27:55 GMT
- Title: Are Face Detection Models Biased?
- Authors: Surbhi Mittal, Kartik Thakral, Puspita Majumdar, Mayank Vatsa, Richa Singh
- Abstract summary: We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
- Score: 69.68854430664399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The presence of bias in deep models leads to unfair outcomes for certain
demographic subgroups. Research in bias focuses primarily on facial recognition
and attribute prediction with scarce emphasis on face detection. Existing
studies consider face detection as binary classification into 'face' and
'non-face' classes. In this work, we investigate possible bias in the domain of
face detection through facial region localization which is currently
unexplored. Since facial region localization is an essential task for all face
recognition pipelines, it is imperative to analyze the presence of such bias in
popular deep models. Most existing face detection datasets lack suitable
annotation for such analysis. Therefore, we web-curate the Fair Face
Localization with Attributes (F2LA) dataset and manually annotate more than 10
attributes per face, including facial localization information. Utilizing the
extensive annotations from F2LA, an experimental setup is designed to study the
performance of four pre-trained face detectors. We observe (i) a high disparity
in detection accuracies across gender and skin-tone, and (ii) interplay of
confounding factors beyond demography. The F2LA data and associated annotations
can be accessed at http://iab-rubric.org/index.php/F2LA.
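As a rough illustration of the kind of disaggregated evaluation the abstract describes, the sketch below computes per-subgroup face detection accuracy by matching predicted boxes to ground-truth boxes with an IoU threshold and grouping results by demographic attributes. This is a minimal, hypothetical sketch and not the authors' released code; the annotation keys ('gt_box', 'pred_boxes', 'gender', 'skin_tone') and the 0.5 IoU threshold are assumptions for illustration only.

```python
from collections import defaultdict

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def subgroup_detection_rates(faces, iou_thresh=0.5):
    """faces: iterable of dicts with 'gt_box', 'pred_boxes' (list of boxes from a
    detector), and demographic annotation keys such as 'gender' and 'skin_tone'
    (assumed field names, not the F2LA schema)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for face in faces:
        # A face counts as detected if any predicted box overlaps its ground truth
        # above the IoU threshold.
        detected = any(iou(face["gt_box"], p) >= iou_thresh for p in face["pred_boxes"])
        for attr in ("gender", "skin_tone"):
            key = (attr, face[attr])
            totals[key] += 1
            hits[key] += int(detected)
    return {key: hits[key] / totals[key] for key in totals}

# Usage: rates = subgroup_detection_rates(annotated_faces)
# A large gap between, e.g., ('skin_tone', 'dark') and ('skin_tone', 'light')
# would indicate the kind of disparity reported in the paper.
```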
Related papers
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Robustness Disparities in Face Detection [64.71318433419636]
We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Detect Faces Efficiently: A Survey and Evaluations [13.105528567365281]
Many applications including face recognition, facial expression recognition, face tracking and head-pose estimation assume that both the location and the size of faces are known in the image.
Deep learning techniques brought remarkable breakthroughs to face detection, at the price of a considerable increase in computation.
This paper introduces representative deep learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency.
arXiv Detail & Related papers (2021-12-03T08:39:40Z)
- Faces in the Wild: Efficient Gender Recognition in Surveillance Conditions [0.0]
We present frontal and wild face versions of three well-known surveillance datasets.
We propose a model that effectively and dynamically combines facial and body information, which makes it suitable for gender recognition in wild conditions.
Our model combines facial and body information through a learnable fusion matrix and a channel-attention sub-network, focusing on the most influential body parts according to the specific image/subject features.
arXiv Detail & Related papers (2021-07-14T17:02:23Z)
- Pre-training strategies and datasets for facial representation learning [58.8289362536262]
We show how to find a universal face representation that can be adapted to several facial analysis tasks and datasets.
We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training.
Among our main findings: unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements.
arXiv Detail & Related papers (2021-03-30T17:57:25Z)
- FusiformNet: Extracting Discriminative Facial Features on Different Levels [0.0]
I propose FusiformNet, a novel framework for feature extraction that leverages the nature of discriminative facial features.
FusiformNet achieved a state-of-the-art accuracy of 96.67% without labeled outside data, image augmentation, normalization, or special loss functions.
Considering its ability to extract both general and local facial features, the utility of FusiformNet may not be limited to facial recognition but also extend to other DNN-based tasks.
arXiv Detail & Related papers (2020-11-01T18:00:59Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)