Are Face Detection Models Biased?
- URL: http://arxiv.org/abs/2211.03588v1
- Date: Mon, 7 Nov 2022 14:27:55 GMT
- Title: Are Face Detection Models Biased?
- Authors: Surbhi Mittal, Kartik Thakral, Puspita Majumdar, Mayank Vatsa, Richa Singh
- Abstract summary: We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
- Score: 69.68854430664399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The presence of bias in deep models leads to unfair outcomes for certain
demographic subgroups. Research in bias focuses primarily on facial recognition
and attribute prediction with scarce emphasis on face detection. Existing
studies consider face detection as binary classification into 'face' and
'non-face' classes. In this work, we investigate possible bias in the domain of
face detection through facial region localization which is currently
unexplored. Since facial region localization is an essential task for all face
recognition pipelines, it is imperative to analyze the presence of such bias in
popular deep models. Most existing face detection datasets lack suitable
annotation for such analysis. Therefore, we web-curate the Fair Face
Localization with Attributes (F2LA) dataset and manually annotate more than 10
attributes per face, including facial localization information. Utilizing the
extensive annotations from F2LA, an experimental setup is designed to study the
performance of four pre-trained face detectors. We observe (i) a high disparity
in detection accuracies across gender and skin-tone, and (ii) interplay of
confounding factors beyond demography. The F2LA data and associated annotations
can be accessed at http://iab-rubric.org/index.php/F2LA.
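To make the per-subgroup evaluation concrete, below is a minimal sketch (not the authors' released code) of how detection accuracy could be broken down by gender and skin-tone: a face counts as detected if some predicted box overlaps its ground-truth box above an IoU threshold. The field names, prediction format, and 0.5 threshold are illustrative assumptions, not the actual F2LA protocol.

```python
from collections import defaultdict

def iou(box_a, box_b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def subgroup_detection_rates(annotations, predictions, iou_thr=0.5):
    # annotations: list of dicts with 'image_id', 'box', 'gender', 'skin_tone'
    #   (hypothetical field names, not the real F2LA schema).
    # predictions: dict mapping image_id -> list of predicted boxes for one detector.
    hits, totals = defaultdict(int), defaultdict(int)
    for ann in annotations:
        key = (ann["gender"], ann["skin_tone"])
        totals[key] += 1
        detected = any(iou(ann["box"], pred) >= iou_thr
                       for pred in predictions.get(ann["image_id"], []))
        hits[key] += int(detected)
    # Per-subgroup detection rate; gaps between subgroups indicate disparity.
    return {key: hits[key] / totals[key] for key in totals}
```

Comparing these per-subgroup rates across several pre-trained detectors is one simple way to surface the kind of gender and skin-tone disparity the abstract reports.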
Related papers
- On the "Illusion" of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes [7.602456562464879]
Face recognition systems exhibit significant accuracy differences based on the user's gender.
We propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis.
Experiments show that the gender gap vanishes when images of male and female subjects share specific attributes.
arXiv Detail & Related papers (2025-01-21T10:21:19Z)
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Generalized Face Liveness Detection via De-fake Face Generator [52.23271636362843]
Previous Face Anti-spoofing (FAS) methods face the challenge of generalizing to unseen domains.
We propose an Anomalous cue Guided FAS (AG-FAS) method, which can effectively leverage large-scale additional real faces.
Our method achieves state-of-the-art results under cross-domain evaluations with unseen scenarios and unknown presentation attacks.
arXiv Detail & Related papers (2024-01-17T06:59:32Z)
- Robustness Disparities in Face Detection [64.71318433419636]
We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or captured in dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Detect Faces Efficiently: A Survey and Evaluations [13.105528567365281]
Many applications including face recognition, facial expression recognition, face tracking and head-pose estimation assume that both the location and the size of faces are known in the image.
Deep learning techniques brought remarkable breakthroughs to face detection, at the price of a considerable increase in computation.
This paper introduces representative deep learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency.
arXiv Detail & Related papers (2021-12-03T08:39:40Z)
- Faces in the Wild: Efficient Gender Recognition in Surveillance Conditions [0.0]
We present frontal and wild face versions of three well-known surveillance datasets.
We propose a model that effectively and dynamically combines facial and body information, which makes it suitable for gender recognition in wild conditions.
Our model combines facial and body information through a learnable fusion matrix and a channel-attention sub-network, focusing on the most influential body parts according to the specific image/subject features.
arXiv Detail & Related papers (2021-07-14T17:02:23Z)
- Pre-training strategies and datasets for facial representation learning [58.8289362536262]
We show how to find a universal face representation that can be adapted to several facial analysis tasks and datasets.
We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training.
A key finding is that unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements.
arXiv Detail & Related papers (2021-03-30T17:57:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of this information and is not responsible for any consequences arising from its use.