A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of
Appearance Bias in the Wild
- URL: http://arxiv.org/abs/2002.05636v3
- Date: Wed, 13 Jan 2021 17:15:05 GMT
- Title: A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of
Appearance Bias in the Wild
- Authors: Ryan Steed and Aylin Caliskan
- Abstract summary: We train a transfer learning model on human subjects' first impressions of personality traits in other faces as measured by social psychologists.
We find that features extracted with FaceNet can be used to predict human appearance bias scores for deliberately manipulated faces.
In contrast to work with human biases in social psychology, the model does not find a significant signal correlating politicians' vote shares with perceived competence bias.
- Score: 3.0349733976070015
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Research in social psychology has shown that people's biased, subjective
judgments about another's personality based solely on their appearance are not
predictive of their actual personality traits. But researchers and companies
often utilize computer vision models to predict similarly subjective
personality attributes such as "employability." We seek to determine whether
state-of-the-art, black box face processing technology can learn human-like
appearance biases. With features extracted with FaceNet, a widely used face
recognition framework, we train a transfer learning model on human subjects'
first impressions of personality traits in other faces as measured by social
psychologists. We find that features extracted with FaceNet can be used to
predict human appearance bias scores for deliberately manipulated faces but not
for randomly generated faces scored by humans. Additionally, in contrast to
work with human biases in social psychology, the model does not find a
significant signal correlating politicians' vote shares with perceived
competence bias. With Local Interpretable Model-Agnostic Explanations (LIME),
we provide several explanations for this discrepancy. Our results suggest that
some signals of appearance bias documented in social psychology are not
embedded by the machine learning techniques we investigate. We shed light on
the ways in which appearance bias could be embedded in face processing
technology and cast further doubt on the practice of predicting subjective
traits based on appearances.
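The setup described in the abstract can be sketched as a simple regression from face embeddings to crowd-sourced first-impression scores. The sketch below is illustrative only: the 128-dimensional embeddings are random placeholders standing in for FaceNet features, and the synthetic "perceived trait" ratings are hypothetical, not the study's data.

```python
# Minimal sketch of a transfer-learning setup like the one described above:
# a linear model trained on face embeddings to predict human
# first-impression scores. Placeholder data throughout.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_faces, dim = 200, 128
embeddings = rng.normal(size=(n_faces, dim))   # stand-in for FaceNet features

# Synthetic ratings with a weak linear signal, mimicking crowd-sourced
# first-impression scores for a perceived trait such as "competence".
weights = rng.normal(size=dim)
ratings = embeddings @ weights * 0.1 + rng.normal(size=n_faces)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, embeddings, ratings, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```

In the paper's framing, the interesting question is whether such a model generalizes from deliberately manipulated faces to randomly generated ones; cross-validated fit on one distribution says nothing about the other.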
Related papers
- Social perception of faces in a vision-language model [11.933952003478172]
We explore social perception of human faces in CLIP, a widely used open-source vision-language model.
We find that age, gender, and race do systematically impact CLIP's social perception of faces.
We find a strong pattern of bias concerning the faces of Black women.
arXiv Detail & Related papers (2024-08-26T17:21:54Z)
- Subjective Face Transform using Human First Impressions [5.026535087391025]
This work uses generative models to find semantically meaningful edits to a face image that change perceived attributes.
We train on real and synthetic faces, evaluate for in-domain and out-of-domain images using predictive models and human ratings.
arXiv Detail & Related papers (2023-09-27T03:21:07Z)
- Robustness Disparities in Face Detection [64.71318433419636]
We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine-presenting, older, of darker skin type, or in dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z)
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks [75.58692290694452]
We compare social biases with non-social biases stemming from choices made during dataset construction that might not even be discernible to the human eye.
We observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models.
arXiv Detail & Related papers (2022-10-18T17:58:39Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Person Perception Biases Exposed: Revisiting the First Impressions Dataset [26.412669618149106]
This work revisits the ChaLearn First Impressions database, annotated for personality perception using pairwise comparisons via crowdsourcing.
We reveal existing person perception biases associated with perceived attributes such as gender, ethnicity, age, and face attractiveness.
arXiv Detail & Related papers (2020-11-30T15:41:27Z)
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases [3.0349733976070015]
We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images.
We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset, automatically learn racial, gender, and intersectional biases.
arXiv Detail & Related papers (2020-10-28T15:55:49Z)
- Salient Facial Features from Humans and Deep Neural Networks [2.5211876507510724]
We explore the features that are used by humans and by convolutional neural networks (ConvNets) to classify faces.
We use Guided Backpropagation (GB) to visualize the facial features that influence the output of a ConvNet the most when identifying specific individuals.
arXiv Detail & Related papers (2020-03-08T22:41:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.