Person Perception Biases Exposed: Revisiting the First Impressions Dataset
- URL: http://arxiv.org/abs/2011.14906v1
- Date: Mon, 30 Nov 2020 15:41:27 GMT
- Title: Person Perception Biases Exposed: Revisiting the First Impressions Dataset
- Authors: Julio C. S. Jacques Junior, Agata Lapedriza, Cristina Palmero, Xavier Baró and Sergio Escalera
- Abstract summary: This work revisits the ChaLearn First Impressions database, annotated for personality perception using pairwise comparisons via crowdsourcing.
We reveal existing person perception biases associated with perceived attributes such as gender, ethnicity, age and face attractiveness.
- Score: 26.412669618149106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work revisits the ChaLearn First Impressions database, annotated for
personality perception using pairwise comparisons via crowdsourcing. We analyse
the original pairwise annotations for the first time and reveal existing person
perception biases associated with perceived attributes such as gender,
ethnicity, age and face attractiveness. We show how person perception bias can
influence the data labelling of a subjective task, an issue that has so far
received little attention from the computer vision and machine learning
communities. We further show that the mechanism used to convert pairwise
annotations to continuous values may magnify these biases if no special
treatment is applied. The findings of this study are relevant for the computer
vision community, which continues to create new datasets for subjective tasks
and to use them in practical applications while ignoring these perceptual
biases.
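The conversion step flagged in the abstract is worth making concrete. The specific mechanism is not detailed in this summary; a standard choice for turning crowdsourced pairwise preferences into continuous scores is the Bradley-Terry model, and the sketch below (plain NumPy, with toy data and function names of our own invention) illustrates how per-item scores and a group-level score gap can be read off a pairwise win matrix.

```python
import numpy as np

def bradley_terry_scores(wins, n_iter=200, tol=1e-9):
    """Fit Bradley-Terry strengths with the MM algorithm (Hunter, 2004).

    wins[i, j] is the number of times item i was preferred over item j.
    Assumes no self-comparisons (zero diagonal) and that every item wins
    and loses at least once, as in the toy data below.
    """
    n = wins.shape[0]
    comparisons = wins + wins.T            # n_ij: total i-vs-j comparisons
    total_wins = wins.sum(axis=1)          # W_i: total wins of item i
    p = np.full(n, 1.0 / n)                # initial strengths
    for _ in range(n_iter):
        # MM update: p_i <- W_i / sum_j [ n_ij / (p_i + p_j) ]
        denom = (comparisons / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = total_wins / denom
        p_new /= p_new.sum()               # strengths are scale-free; normalize
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new

# Toy example: 4 items; 0-1 belong to perceived group A, 2-3 to group B,
# and annotators slightly favour group A in head-to-head comparisons.
wins = np.array([
    [0, 3, 4, 4],
    [2, 0, 4, 3],
    [1, 1, 0, 3],
    [1, 2, 2, 0],
], dtype=float)

scores = bradley_terry_scores(wins)
print("per-item scores:", np.round(scores, 3))
print("group gap (A - B):", round(scores[:2].mean() - scores[2:].mean(), 3))
```

Comparing group-level means of the fitted continuous scores against the raw win rates is one simple way to check whether such a conversion widens a gap already present in the annotations.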
Related papers
- Balancing the Scales: Enhancing Fairness in Facial Expression Recognition with Latent Alignment [5.784550537553534]
This work leverages representation learning based on latent spaces to mitigate bias in facial expression recognition systems.
It also enhances a deep learning model's fairness and overall accuracy.
arXiv Detail & Related papers (2024-10-25T10:03:10Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models [23.65626682262062]
We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models.
Overall, we find that bias amplification in pretraining and after fine-tuning are independent.
arXiv Detail & Related papers (2023-10-26T16:19:19Z) - Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Personalized Detection of Cognitive Biases in Actions of Users from Their Logs: Anchoring and Recency Biases [9.445205340175555]
We focus on two cognitive biases: anchoring and recency.
In computer science, the recognition of cognitive bias has largely been confined to information retrieval.
We offer a principled machine learning approach to detect these two cognitive biases from Web logs of users' actions.
arXiv Detail & Related papers (2022-06-30T08:51:15Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects caused by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z) - Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented that jointly learns, generalizes effectively, and performs affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z) - Uncovering the Bias in Facial Expressions [0.0]
We train a neural network for Action Unit classification and analyze its performance quantitatively based on its accuracy and qualitatively based on heatmaps.
A structured review of our results indicates that we are able to detect bias.
Even though we cannot conclude from our results that lower classification performance emerged solely from gender and skin color bias, these biases must be addressed.
arXiv Detail & Related papers (2020-11-23T10:20:10Z) - REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets [64.76453161039973]
REVISE (REvealing VIsual biaSEs) is a tool that assists in the investigation of a visual dataset.
It surfaces potential biases along three dimensions: (1) object-based, (2) person-based, and (3) geography-based.
arXiv Detail & Related papers (2020-04-16T23:54:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences.