Gender Slopes: Counterfactual Fairness for Computer Vision Models by
Attribute Manipulation
- URL: http://arxiv.org/abs/2005.10430v1
- Date: Thu, 21 May 2020 02:33:28 GMT
- Title: Gender Slopes: Counterfactual Fairness for Computer Vision Models by
Attribute Manipulation
- Authors: Jungseock Joo, Kimmo Kärkkäinen
- Abstract summary: Automated computer vision systems have been applied in many domains including security, law enforcement, and personal devices.
Recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups.
We propose to use an encoder-decoder network developed for image manipulation to synthesize facial images varying in the dimensions of gender and race.
- Score: 4.784524967912113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated computer vision systems have been applied in many domains including
security, law enforcement, and personal devices, but recent reports suggest
that these systems may produce biased results, discriminating against people in
certain demographic groups. Diagnosing and understanding the underlying true
causes of model biases, however, are challenging tasks because modern computer
vision systems rely on complex black-box models whose behaviors are hard to
decode. We propose to use an encoder-decoder network developed for image
attribute manipulation to synthesize facial images varying in the dimensions of
gender and race while keeping other signals intact. We use these synthesized
images to measure counterfactual fairness of commercial computer vision
classifiers by examining the degree to which these classifiers are affected by
gender and racial cues controlled in the images, e.g., feminine faces may
elicit higher scores for the concept of nurse and lower scores for STEM-related
concepts. We also report the skewed gender representations in an online search
service on profession-related keywords, which may explain the origin of the
biases encoded in the models.
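The counterfactual measurement described in the abstract can be pictured concretely: synthesize a series of faces that differ only in the manipulated attribute, score each image with the classifier under test, and fit a line of score against attribute intensity; the slope summarizes how strongly the controlled cue moves the prediction. The sketch below illustrates that idea with a hypothetical gender_slope helper and made-up scores; it is not the authors' code, and the exact regression setup used in the paper may differ.

```python
# Minimal sketch (not the authors' code): estimate a "gender slope" for one
# concept classifier by regressing its scores on the strength of the
# manipulated attribute across a set of counterfactual images.
import numpy as np


def gender_slope(scores, attribute_levels):
    """Fit score = a * level + b by least squares and return the slope a.

    scores           : classifier confidences for one concept (e.g. "nurse"),
                       one per synthesized image.
    attribute_levels : attribute intensity used to synthesize each image
                       (e.g. -1.0 = more masculine ... +1.0 = more feminine).
    A slope near zero suggests the classifier ignores the manipulated cue;
    a large slope indicates counterfactual sensitivity to it.
    """
    x = np.asarray(attribute_levels, dtype=float)
    y = np.asarray(scores, dtype=float)
    slope, _intercept = np.polyfit(x, y, deg=1)
    return slope


if __name__ == "__main__":
    # Hypothetical scores returned by a commercial classifier for the
    # concept "nurse" on images synthesized at five attribute levels.
    levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
    nurse_scores = [0.21, 0.27, 0.35, 0.44, 0.52]
    print(f"gender slope for 'nurse': {gender_slope(nurse_scores, levels):+.3f}")
```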
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - Synthetic Counterfactual Faces [1.3062016289815055]
We build a generative AI framework to construct targeted, counterfactual, high-quality synthetic face data.
Our pipeline has many use cases, including face recognition systems sensitivity evaluations and image understanding system probes.
We showcase the efficacy of our face generation pipeline on a leading commercial vision model.
arXiv Detail & Related papers (2024-07-18T22:22:49Z) - FACET: Fairness in Computer Vision Evaluation Benchmark [21.862644380063756]
Computer vision models have known performance disparities across attributes such as gender and skin tone.
We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion)
FACET is a large, publicly available evaluation set of 32k images for some of the most common vision tasks.
arXiv Detail & Related papers (2023-08-31T17:59:48Z) - Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z) - Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z) - Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z) - Fairness Indicators for Systematic Assessments of Visual Feature
Extractors [21.141633753573764]
We propose three fairness indicators, which aim at quantifying harms and biases of visual systems.
Our indicators use existing publicly available datasets collected for fairness evaluations.
These indicators are not intended to be a substitute for a thorough analysis of the broader impact of the new computer vision technologies.
arXiv Detail & Related papers (2022-02-15T17:45:33Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Robustness Disparities in Commercial Face Detection [72.25318723264215]
We present the first of its kind detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform.
We generally find that photos of individuals who are older, masculine presenting, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2021-08-27T21:37:16Z) - GenderRobustness: Robustness of Gender Detection in Facial Recognition
Systems with variation in Image Properties [0.5330240017302619]
Artificial intelligence systems and computer vision algorithms have increasingly been accused of harboring implicit biases.
One such class of systems is facial recognition, where bias has been observed on the basis of gender, ethnicity, skin tone, and other facial attributes.
Developers of these systems must ensure that bias is kept to a bare minimum or, ideally, eliminated.
arXiv Detail & Related papers (2020-11-18T18:13:23Z) - Image Representations Learned With Unsupervised Pre-Training Contain
Human-like Biases [3.0349733976070015]
We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images.
We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset, automatically learn racial, gender, and intersectional biases.
arXiv Detail & Related papers (2020-10-28T15:55:49Z)
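The last paper in the list quantifies biased associations between image embeddings of social concepts and attributes, in the spirit of embedding association tests. The following is a rough sketch of such a test, assuming concept and attribute embeddings are already available as NumPy arrays; it is an illustration under those assumptions, not the paper's actual procedure.

```python
# Rough sketch (assumed setup, not the paper's code): a WEAT-style association
# test on image embeddings, measuring whether target concept X associates more
# strongly with attribute set A than B, relative to target concept Y.
import numpy as np


def _cos(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def _assoc(w, A, B):
    # Differential association of one embedding w with attribute sets A and B.
    return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])


def effect_size(X, Y, A, B):
    """Cohen's-d-style effect size of the X/Y vs. A/B association."""
    x_assoc = [_assoc(x, A, B) for x in X]
    y_assoc = [_assoc(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std


if __name__ == "__main__":
    # Hypothetical 8-dimensional embeddings for two target concepts (X, Y)
    # and two attribute sets (A, B), drawn at random for demonstration only.
    rng = np.random.default_rng(0)
    X, Y, A, B = (rng.normal(size=(5, 8)) for _ in range(4))
    print(f"effect size: {effect_size(X, Y, A, B):+.3f}")
```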