Mitigating Bias in Facial Analysis Systems by Incorporating Label
Diversity
- URL: http://arxiv.org/abs/2204.06364v1
- Date: Wed, 13 Apr 2022 13:17:27 GMT
- Title: Mitigating Bias in Facial Analysis Systems by Incorporating Label
Diversity
- Authors: Camila Kolling, Victor Araujo, Adriano Veloso and Soraia Raupp Musse
- Abstract summary: We introduce a novel learning method that combines subjective human-based labels and objective annotations based on mathematical definitions of facial traits.
Our method successfully mitigates unintended biases, while maintaining significant accuracy on the downstream task.
- Score: 4.089080285684415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial analysis models are increasingly applied in real-world applications
that have significant impact on people's lives. However, as previously shown,
models that automatically classify facial attributes might exhibit algorithmic
discrimination behavior with respect to protected groups, potentially posing
negative impacts on individuals and society. It is therefore critical to
develop techniques that can mitigate unintended biases in facial classifiers.
Hence, in this work, we introduce a novel learning method that combines both
subjective human-based labels and objective annotations based on mathematical
definitions of facial traits. Specifically, we generate new objective
annotations from a large-scale human-annotated dataset, each capturing a
different perspective of the analyzed facial trait. We then propose an ensemble
learning method, which combines individual models trained on different types of
annotations. We provide an in-depth analysis of the annotation procedure as
well as the dataset distribution. Moreover, we empirically demonstrate that, by
incorporating label diversity, and without additional synthetic images, our
method successfully mitigates unintended biases, while maintaining significant
accuracy on the downstream task.
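The abstract describes an ensemble that combines individual models, each trained on a different type of annotation (subjective human labels vs. objective, mathematically defined labels). A minimal sketch of that idea is probability-averaged soft voting; note the aggregation rule and the toy probability arrays below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Combine per-model class probabilities by (weighted) averaging.

    prob_list: list of (n_samples, n_classes) arrays, one array per model,
    e.g. one model trained on subjective labels and one on objective labels.
    Returns the ensemble's predicted class indices and averaged probabilities.
    """
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    avg = np.tensordot(weights, probs, axes=1)  # weighted mean over the model axis
    return avg.argmax(axis=1), avg

# Hypothetical outputs of two models trained on different label sets
p_subjective = np.array([[0.9, 0.1], [0.4, 0.6]])
p_objective  = np.array([[0.6, 0.4], [0.2, 0.8]])

labels, avg_probs = soft_vote([p_subjective, p_objective])
```

Averaging probabilities (rather than hard-voting on labels) lets a confident model outweigh an uncertain one, which is one common reason ensembles over diverse label sources can smooth out biases present in any single annotation scheme.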
Related papers
- Balancing the Scales: Enhancing Fairness in Facial Expression Recognition with Latent Alignment [5.784550537553534]
This work leverages latent-space representation learning to mitigate bias in facial expression recognition systems.
It also enhances a deep learning model's fairness and overall accuracy.
arXiv Detail & Related papers (2024-10-25T10:03:10Z) - Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z) - Towards an objective characterization of an individual's facial
movements using Self-Supervised Person-Specific-Models [0.3441021278275805]
We present a novel training approach to learn facial movements independently of other facial characteristics.
One model per individual can learn to extract an embedding of the facial movements independently of the person's identity.
We present quantitative and qualitative evidence that this approach is easily scalable and generalizable for new individuals.
arXiv Detail & Related papers (2022-11-15T16:30:24Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Towards Intrinsic Common Discriminative Features Learning for Face
Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Information-Theoretic Bias Assessment Of Learned Representations Of
Pretrained Face Recognition [18.07966649678408]
We propose an information-theoretic, independent bias assessment metric to identify degree of bias against protected demographic attributes.
Our metric differs from other methods that rely on classification accuracy, or that examine differences between ground-truth labels and labels of protected attributes predicted by a shallow network.
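The entry above describes assessing bias by measuring how much information a learned representation carries about a protected attribute. One elementary instance of that idea (a hedged illustration, not the paper's actual metric) is the empirical mutual information between a discretized representation and the attribute, where zero indicates the representation reveals nothing about the attribute:

```python
import numpy as np

def discrete_mutual_information(x, y):
    """Empirical mutual information I(X; Y) in bits for discrete arrays.

    Computed directly from joint and marginal frequencies:
    I(X; Y) = sum_{x,y} p(x,y) * log2(p(x,y) / (p(x) * p(y))).
    """
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint frequency
            p_x = np.mean(x == xv)                 # marginal of X
            p_y = np.mean(y == yv)                 # marginal of Y
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Toy check: a code identical to the attribute leaks H(attr) = 1 bit,
# while a code independent of the attribute leaks 0 bits.
attr = np.array([0, 0, 1, 1])
leaky_code = attr.copy()
fair_code = np.array([0, 1, 0, 1])
```

Real representations are continuous and high-dimensional, so practical estimators (binning, k-NN, or variational bounds) replace this direct count; the quantity being estimated is the same.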
arXiv Detail & Related papers (2021-11-08T17:41:17Z) - Unravelling the Effect of Image Distortions for Biased Prediction of
Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions correlate with the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Pre-training strategies and datasets for facial representation learning [58.8289362536262]
We show how to find a universal face representation that can be adapted to several facial analysis tasks and datasets.
We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training.
Our two main findings are: unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements.
arXiv Detail & Related papers (2021-03-30T17:57:25Z) - Model-agnostic Fits for Understanding Information Seeking Patterns in
Humans [0.0]
In decision making tasks under uncertainty, humans display characteristic biases in seeking, integrating, and acting upon information relevant to the task.
Here, we reexamine data from previous carefully designed experiments, collected at scale, that measured and catalogued these biases in aggregate form.
We design deep learning models that replicate these biases in aggregate, while also capturing individual variation in behavior.
arXiv Detail & Related papers (2020-12-09T04:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.