Robustness Disparities in Face Detection
- URL: http://arxiv.org/abs/2211.15937v1
- Date: Tue, 29 Nov 2022 05:22:47 GMT
- Title: Robustness Disparities in Face Detection
- Authors: Samuel Dooley, George Z. Wei, Tom Goldstein, John P. Dickerson
- Abstract summary: We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are $\textit{masculine presenting}$, $\textit{older}$, of $\textit{darker skin type}$, or have $\textit{dim lighting}$ are more susceptible to errors than their counterparts in other identities.
- Score: 64.71318433419636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial analysis systems have been deployed by large companies and critiqued
by scholars and activists for the past decade. Many existing algorithmic audits
examine the performance of these systems on later stage elements of facial
analysis systems like facial recognition and age, emotion, or perceived gender
prediction; however, a core component to these systems has been vastly
understudied from a fairness perspective: face detection, sometimes called face
localization. Since face detection is a pre-requisite step in facial analysis
systems, the bias we observe in face detection will flow downstream to the
other components like facial recognition and emotion prediction. Additionally,
no prior work has focused on the robustness of these systems under various
perturbations and corruptions, which leaves open the question of how various
people are impacted by these phenomena. We present the first of its kind
detailed benchmark of face detection systems, specifically examining the
robustness to noise of commercial and academic models. We use both standard and
recently released academic facial datasets to quantitatively analyze trends in
face detection robustness. Across all the datasets and systems, we generally
find that photos of individuals who are $\textit{masculine presenting}$,
$\textit{older}$, of $\textit{darker skin type}$, or have $\textit{dim
lighting}$ are more susceptible to errors than their counterparts in other
identities.
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - A Comparative Study of Face Detection Algorithms for Masked Face Detection [0.0]
A subclass of the face detection problem that has recently gained increasing attention is occluded face detection.
Three years on since the advent of the COVID-19 pandemic, there is still a complete lack of evidence regarding how well existing face detection algorithms perform on masked faces.
This article first offers a brief review of state-of-the-art face detectors and detectors made for the masked face problem, along with a review of the existing masked face datasets.
We evaluate and compare the performances of a well-representative set of face detectors at masked face detection and conclude with a discussion on the possible contributing factors to
arXiv Detail & Related papers (2023-05-18T16:03:37Z) - Psychophysical Evaluation of Human Performance in Detecting Digital Face Image Manipulations [14.63266615325105]
This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics.
We examine human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching.
arXiv Detail & Related papers (2022-01-28T12:45:33Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Evaluation of Human and Machine Face Detection using a Novel Distinctive Human Appearance Dataset [0.76146285961466]
We evaluate current state-of-the-art face-detection models in their ability to detect faces in images.
The evaluation results show that face-detection algorithms do not generalize well to diverse appearances.
arXiv Detail & Related papers (2021-11-01T02:20:40Z) - Robustness Disparities in Commercial Face Detection [72.25318723264215]
We present the first of its kind detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform.
We generally find that photos of individuals who are older, masculine presenting, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2021-08-27T21:37:16Z) - I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.