Lookism: The overlooked bias in computer vision
- URL: http://arxiv.org/abs/2408.11448v1
- Date: Wed, 21 Aug 2024 09:07:20 GMT
- Title: Lookism: The overlooked bias in computer vision
- Authors: Aditya Gulati, Bruno Lepri, Nuria Oliver
- Abstract summary: Lookism remains under-explored in computer vision but can have profound implications.
This paper advocates for the systematic study of lookism as a critical bias in computer vision models.
- Score: 11.306732956100213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there have been significant advancements in computer vision which have led to the widespread deployment of image recognition and generation systems in socially relevant applications, from hiring to security screening. However, the prevalence of biases within these systems has raised significant ethical and social concerns. The most extensively studied biases in this context are related to gender, race and age. Yet, other biases are equally pervasive and harmful, such as lookism, i.e., the preferential treatment of individuals based on their physical appearance. Lookism remains under-explored in computer vision but can have profound implications not only by perpetuating harmful societal stereotypes but also by undermining the fairness and inclusivity of AI technologies. Thus, this paper advocates for the systematic study of lookism as a critical bias in computer vision models. Through a comprehensive review of existing literature, we identify three areas of intersection between lookism and computer vision. We illustrate them by means of examples and a user study. We call for an interdisciplinary approach to address lookism, urging researchers, developers, and policymakers to prioritize the development of equitable computer vision systems that respect and reflect the diversity of human appearances.
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- Robustness Disparities in Face Detection [64.71318433419636]
We present a first-of-its-kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, or of darker skin type, or photos taken in dim lighting, are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z)
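As a rough illustration of the kind of measurement such a robustness benchmark performs, the sketch below perturbs images with synthetic noise and compares face-detection miss rates across subgroups. It is a minimal sketch, not the benchmark's actual code; `detect_faces` and the (image, subgroup, face-count) records are hypothetical stand-ins.

```python
# Illustrative sketch: per-subgroup face-detection error rates under noise.
# `detect_faces` and the (image, subgroup, true_face_count) records are hypothetical.
from collections import defaultdict

import numpy as np


def add_gaussian_noise(image: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """Corrupt an 8-bit image with additive Gaussian noise."""
    noisy = image.astype(np.float32) + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def subgroup_error_rates(records, detect_faces, sigma: float = 25.0):
    """Fraction of images per subgroup where the detector misses a face after noise.

    `records` is an iterable of (image, subgroup_label, true_face_count) tuples,
    and `detect_faces(image)` returns a list of detected bounding boxes.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, subgroup, true_count in records:
        detected = len(detect_faces(add_gaussian_noise(image, sigma)))
        errors[subgroup] += int(detected < true_count)  # missed at least one face
        totals[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}


def robustness_disparity(error_rates: dict) -> float:
    """Gap between the worst- and best-served subgroup under the same corruption."""
    return max(error_rates.values()) - min(error_rates.values())
```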
- Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey [3.4767443062432326]
We provide an in-depth overview of the main debiasing methods for fairness-aware neural networks.
We propose a novel taxonomy to better organize the literature on debiasing methods for fairness.
arXiv Detail & Related papers (2022-11-10T14:42:46Z)
- Fairness Indicators for Systematic Assessments of Visual Feature Extractors [21.141633753573764]
We propose three fairness indicators, which aim at quantifying harms and biases of visual systems.
Our indicators use existing publicly available datasets collected for fairness evaluations.
These indicators are not intended to be a substitute for a thorough analysis of the broader impact of the new computer vision technologies.
arXiv Detail & Related papers (2022-02-15T17:45:33Z)
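The paper's three indicators are not reproduced here, but the hedged sketch below shows the general shape of such an indicator: per-group accuracy of a downstream classifier built on frozen visual features, summarized by the gap between the best- and worst-served groups. `extract_features`, `classifier`, and the labeled records are hypothetical.

```python
# Illustrative sketch of a group-disparity indicator for a visual feature extractor.
# `extract_features`, `classifier`, and the (image, group, target) records are hypothetical.
from collections import defaultdict


def per_group_accuracy(records, extract_features, classifier):
    """Accuracy of a downstream classifier on frozen features, broken down by group.

    `records` is an iterable of (image, group_label, target_label) tuples.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, group, target in records:
        prediction = classifier(extract_features(image))
        correct[group] += int(prediction == target)
        total[group] += 1
    return {g: correct[g] / total[g] for g in total}


def worst_group_gap(accuracies: dict) -> float:
    """A simple indicator: how far the worst-served group lags the best-served one."""
    return max(accuracies.values()) - min(accuracies.values())
```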
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always at least as biased as, and often more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Robustness Disparities in Commercial Face Detection [72.25318723264215]
We present a first-of-its-kind detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform.
We generally find that photos of individuals who are older, masculine presenting, or of darker skin type, or photos taken in dim lighting, are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2021-08-27T21:37:16Z)
- GenderRobustness: Robustness of Gender Detection in Facial Recognition Systems with variation in Image Properties [0.5330240017302619]
Artificial intelligence systems and computer vision algorithms have faced increasing accusations of harboring implicit biases.
Facial recognition systems are one such class of systems, where bias has been observed on the basis of gender, ethnicity, skin tone, and other facial attributes.
Developers of these systems must ensure that bias is kept to a bare minimum, or ideally eliminated.
arXiv Detail & Related papers (2020-11-18T18:13:23Z)
- Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation [4.784524967912113]
Automated computer vision systems have been applied in many domains including security, law enforcement, and personal devices.
Recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups.
We propose to use an encoder-decoder network developed for image manipulation to synthesize facial images varying in the dimensions of gender and race.
arXiv Detail & Related papers (2020-05-21T02:33:28Z)
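A minimal sketch of the counterfactual-evaluation idea described in the Gender Slopes entry above, assuming an attribute-manipulation model is available: synthesize versions of each image that vary a single attribute, score them with the model under audit, and fit a line to the scores. `manipulate_attribute` and `score_image` are hypothetical placeholders, not the paper's released code.

```python
# Sketch of counterfactual evaluation via attribute manipulation.
# `manipulate_attribute` (an encoder-decoder that shifts one attribute, e.g.
# perceived gender, by a given amount) and `score_image` (the downstream model
# under audit, returning a probability) are hypothetical stand-ins.
import numpy as np


def attribute_slope(images, manipulate_attribute, score_image,
                    attribute: str = "gender",
                    deltas=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Estimate how the audited model's score changes as one attribute is varied.

    For each image we synthesize counterfactual versions along `attribute`,
    score each version, and fit a line; the mean slope summarizes how strongly
    the model's output tracks that attribute. A near-zero slope is what a
    counterfactually fair model would produce.
    """
    slopes = []
    for image in images:
        scores = [score_image(manipulate_attribute(image, attribute, d)) for d in deltas]
        slope, _intercept = np.polyfit(deltas, scores, deg=1)
        slopes.append(slope)
    return float(np.mean(slopes))
```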
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.