Anatomizing Bias in Facial Analysis
- URL: http://arxiv.org/abs/2112.06522v1
- Date: Mon, 13 Dec 2021 09:51:13 GMT
- Title: Anatomizing Bias in Facial Analysis
- Authors: Richa Singh, Puspita Majumdar, Surbhi Mittal, Mayank Vatsa
- Abstract summary: Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
- Score: 86.79402670904338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing facial analysis systems have been shown to yield biased results
against certain demographic subgroups. Due to its impact on society, it has
become imperative to ensure that these systems do not discriminate based on
gender, identity, or skin tone of individuals. This has led to research in the
identification and mitigation of bias in AI systems. In this paper, we
encapsulate bias detection/estimation and mitigation algorithms for facial
analysis. Our main contributions include a systematic review of algorithms
proposed for understanding bias, along with a taxonomy and extensive overview
of existing bias mitigation algorithms. We also discuss open challenges in the
field of biased facial analysis.
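To make concrete what "bias estimation" typically measures in this literature, below is a minimal sketch in Python. It computes per-subgroup accuracy and the resulting performance gap, one of the simplest disparity measures used when auditing facial analysis models. The function name, arrays, and subgroup labels are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Per-subgroup accuracy and the max-min accuracy gap.

    y_true, y_pred : ground-truth and predicted labels.
    groups         : demographic subgroup label for each sample
                     (e.g. skin-tone bins or gender labels).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accuracies = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracies[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Hypothetical example: a classifier evaluated over two skin-tone bins.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["darker", "darker", "lighter", "darker",
          "lighter", "lighter", "darker", "lighter"]
per_group, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
print(per_group, gap)
```

A large gap between the best- and worst-performing subgroups is the kind of disparity that the surveyed detection and mitigation algorithms aim to quantify and reduce.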
Related papers
- Robustness Disparities in Face Detection [64.71318433419636]
We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z) - Robustness Disparities in Commercial Face Detection [72.25318723264215]
We present the first of its kind detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform.
We generally find that photos of individuals who are older, masculine presenting, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2021-08-27T21:37:16Z) - Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Assessing Risks of Biases in Cognitive Decision Support Systems [5.480546613836199]
This paper addresses a challenging research question: how can an ensemble of biases be managed?
We provide performance projections of the cognitive Decision Support System operational landscape in terms of biases.
We also provide a motivational experiment using face biometric component of the checkpoint system which highlights the discovery of an ensemble of biases.
arXiv Detail & Related papers (2020-07-28T16:53:45Z) - Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z) - Demographic Bias: A Challenge for Fingervein Recognition Systems? [0.0]
Concerns regarding potential biases in the underlying algorithms of many automated systems (including biometrics) have been raised.
A biased algorithm produces statistically different outcomes for different groups of individuals based on certain attributes (often protected by anti-discrimination legislation), such as sex and age.
In this paper, several popular types of recognition algorithms are benchmarked to investigate whether this holds for fingervein recognition (see the per-subgroup error-rate sketch after this list).
arXiv Detail & Related papers (2020-04-03T07:53:11Z) - Demographic Bias in Biometrics: A Survey on an Emerging Challenge [0.0]
Biometric systems rely on the uniqueness of certain biological or behavioural characteristics of human beings.
There has been a wave of public and academic concerns regarding the existence of systemic bias in automated decision systems.
arXiv Detail & Related papers (2020-03-05T09:07:59Z)
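As referenced in the fingervein benchmarking entry above, a common way to check a verification system for demographic bias is to compute its error rates separately per subgroup at a fixed decision threshold. The sketch below is a minimal illustration with hypothetical score arrays, group labels, and function name; it is not the procedure of any cited paper, only the standard false match rate (FMR) / false non-match rate (FNMR) bookkeeping.

```python
import numpy as np

def per_group_error_rates(scores, is_mated, groups, threshold):
    """False match rate (FMR) and false non-match rate (FNMR) per subgroup.

    scores    : comparison scores (higher = more similar).
    is_mated  : True if the pair comes from the same subject, else False.
    groups    : demographic subgroup label of each comparison.
    threshold : decision threshold; score >= threshold means "accept".
    """
    scores, is_mated, groups = map(np.asarray, (scores, is_mated, groups))
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        mated, non_mated = scores[m & is_mated], scores[m & ~is_mated]
        fnmr = float((mated < threshold).mean()) if mated.size else float("nan")
        fmr = float((non_mated >= threshold).mean()) if non_mated.size else float("nan")
        rates[str(g)] = {"FNMR": fnmr, "FMR": fmr}
    return rates

# Hypothetical usage: compare error rates of two demographic groups.
scores   = [0.9, 0.4, 0.8, 0.2, 0.7, 0.6, 0.3, 0.1]
is_mated = [True, False, True, False, True, True, False, False]
groups   = ["A", "A", "B", "B", "A", "B", "A", "B"]
print(per_group_error_rates(scores, is_mated, groups, threshold=0.5))
```

Noticeably different FMR or FNMR values across groups at the same threshold are the kind of demographic disparity that these benchmarking studies report.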