GenderRobustness: Robustness of Gender Detection in Facial Recognition
Systems with variation in Image Properties
- URL: http://arxiv.org/abs/2011.10472v2
- Date: Thu, 26 Nov 2020 22:18:15 GMT
- Title: GenderRobustness: Robustness of Gender Detection in Facial Recognition
Systems with variation in Image Properties
- Authors: Sharadha Srinivasan, Madan Musuvathi
- Abstract summary: There have been increasing accusations against artificial intelligence systems, and computer vision algorithms in particular, of possessing implicit biases.
One such class of systems where bias is said to exist is facial recognition systems, where bias has been observed on the basis of gender, ethnicity, skin tone, and other facial attributes.
Developers of these systems must ensure that bias is kept to a bare minimum or, ideally, is non-existent.
- Score: 0.5330240017302619
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In recent times, there have been increasing accusations against
artificial intelligence systems, and computer vision algorithms in particular,
of possessing implicit biases. Even though these conversations are more
prevalent now and systems are improving through extensive testing and a
broadened scope, biases still do exist. One such class of systems where bias is
said to exist is facial recognition systems, where bias has been observed on
the basis of gender, ethnicity, skin tone, and other facial attributes. This is
all the more disturbing given that these systems are used in practically every
sector of industry today. From tasks as critical as criminal identification to
ones as simple as registering attendance, these systems have gained a huge
market, especially in recent years. That alone is reason enough for the
developers of these systems to keep bias to a bare minimum or, ideally,
eliminate it entirely, to avoid major issues such as favoring a particular
gender, race, or class of people, or making a class of people susceptible to
false accusations because the systems fail to recognize them correctly.
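The abstract above is mostly motivation; the title indicates the paper's actual
experiment: probing whether a gender prediction stays stable as basic image
properties are varied. The following is a minimal sketch of that idea, not the
authors' pipeline; the `predict_gender` callable is a hypothetical stand-in for
whatever facial analysis system is under test, and the property list and factor
range are assumptions.

```python
# Sweep brightness, contrast, and sharpness, and count how often the
# gender label flips relative to the unmodified image. `predict_gender`
# is a hypothetical stand-in for the system under test.
from PIL import Image, ImageEnhance

ENHANCERS = {
    "brightness": ImageEnhance.Brightness,
    "contrast": ImageEnhance.Contrast,
    "sharpness": ImageEnhance.Sharpness,
}
FACTORS = [0.25, 0.5, 1.5, 2.0]  # 1.0 would leave the image unchanged

def probe_robustness(image_path, predict_gender):
    """Return the fraction of perturbed variants whose predicted gender
    label matches the prediction on the original image."""
    original = Image.open(image_path).convert("RGB")
    baseline = predict_gender(original)
    flips = total = 0
    for name, enhancer_cls in ENHANCERS.items():
        for factor in FACTORS:
            variant = enhancer_cls(original).enhance(factor)
            total += 1
            if predict_gender(variant) != baseline:
                flips += 1
    return 1.0 - flips / total
```

Aggregating this stability score per demographic subgroup is what would expose
a robustness gap of the kind the related papers below report.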
Related papers
- Lookism: The overlooked bias in computer vision [11.306732956100213]
Lookism remains under-explored in computer vision but can have profound implications.
This paper advocates for the systematic study of lookism as a critical bias in computer vision models.
arXiv Detail & Related papers (2024-08-21T09:07:20Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
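The Stable Bias entry describes enumerating gender and ethnicity markers in
otherwise-identical prompts and characterizing the variation in the generated
images. A hypothetical sketch of that enumeration step follows; `generate_image`
stands in for any of the three text-to-image systems the paper tests, and the
marker lists are illustrative, not the paper's.

```python
# Build a grid of prompts that differ only in identity markers, in the
# spirit of the Stable Bias methodology. `generate_image` is a hypothetical
# stand-in for a text-to-image system.
from itertools import product

GENDER_MARKERS = ["woman", "man", "non-binary person", "person"]
ETHNICITY_MARKERS = ["", "Black ", "East Asian ", "Hispanic ", "White "]
PROFESSIONS = ["doctor", "teacher", "CEO", "janitor"]

def build_prompt_grid():
    """Yield (prompt, metadata) pairs covering the full marker grid."""
    for ethnicity, gender, job in product(ETHNICITY_MARKERS,
                                          GENDER_MARKERS, PROFESSIONS):
        prompt = f"a photo of a {ethnicity}{gender} working as a {job}"
        yield prompt, {"ethnicity": ethnicity.strip() or "unspecified",
                       "gender": gender, "profession": job}

def run_grid(generate_image, samples_per_prompt=4):
    """Generate several images per prompt; comparing the distribution of
    generated attributes across marker settings is the bias analysis."""
    return {prompt: [generate_image(prompt) for _ in range(samples_per_prompt)]
            for prompt, _meta in build_prompt_grid()}
```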
- Robustness Disparities in Face Detection [64.71318433419636]
We present the first-of-its-kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z)
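The robustness-disparity benchmarks in this list share one core measurement:
how detection failure rates under corruption differ across demographic
subgroups. A hypothetical sketch follows; `detect_face` stands in for any
detector under test, Gaussian noise is only one of the corruption types such
benchmarks use, and the group annotations are assumptions about the dataset.

```python
# Compare face-detection failure rates under additive Gaussian noise,
# grouped by a demographic annotation. `detect_face` is a hypothetical
# detector interface that returns None when no face is found.
import numpy as np

def add_gaussian_noise(image, sigma):
    """Additive Gaussian pixel noise; `image` is a float array in [0, 1]."""
    noisy = image + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def disparity_report(dataset, detect_face, sigmas=(0.05, 0.1, 0.2)):
    """`dataset` yields (image, group) pairs, where `group` is an
    annotation such as perceived skin type or age bucket. Returns the
    per-group, per-noise-level failure rate."""
    failures, counts = {}, {}
    for image, group in dataset:
        for sigma in sigmas:
            key = (group, sigma)
            counts[key] = counts.get(key, 0) + 1
            if detect_face(add_gaussian_noise(image, sigma)) is None:
                failures[key] = failures.get(key, 0) + 1
    return {key: failures.get(key, 0) / counts[key] for key in counts}
```

A gap between groups at the same noise level is exactly the disparity these
papers report for older, darker-skinned, or masculine-presenting faces.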
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Robustness Disparities in Commercial Face Detection [72.25318723264215]
We present the first-of-its-kind detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform.
We generally find that photos of individuals who are older, masculine presenting, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2021-08-27T21:37:16Z)
- Biometrics: Trust, but Verify [49.9641823975828]
Biometric recognition has exploded into a plethora of different applications around the globe.
There are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems.
arXiv Detail & Related papers (2021-05-14T03:07:25Z)
- Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation [4.784524967912113]
Automated computer vision systems have been applied in many domains including security, law enforcement, and personal devices.
Recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups.
We propose to use an encoder-decoder network developed for image manipulation to synthesize facial images varying in the dimensions of gender and race.
arXiv Detail & Related papers (2020-05-21T02:33:28Z)
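The Gender Slopes entry describes synthesizing counterfactual faces that vary
only along gender or race and measuring how a downstream model's output shifts.
A hypothetical sketch of that audit follows; `manipulate_attribute` stands in
for the paper's encoder-decoder image manipulator and `classifier_score` for
the computer vision model being audited.

```python
# Estimate a "slope": how much a classifier's score changes as one
# protected attribute is varied while everything else is held fixed.
# `manipulate_attribute` and `classifier_score` are hypothetical stand-ins.
import numpy as np

def attribute_slope(images, manipulate_attribute, classifier_score,
                    attribute="gender", steps=np.linspace(-1.0, 1.0, 5)):
    """For each image, synthesize variants along the attribute axis and
    fit a least-squares line of classifier score against step value.
    A mean slope near zero suggests counterfactual fairness on this axis."""
    slopes = []
    for image in images:
        scores = [classifier_score(manipulate_attribute(image, attribute, s))
                  for s in steps]
        slope, _intercept = np.polyfit(steps, scores, deg=1)
        slopes.append(slope)
    return float(np.mean(slopes))
```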
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.