Toward Fair Facial Expression Recognition with Improved Distribution Alignment
- URL: http://arxiv.org/abs/2306.06696v1
- Date: Sun, 11 Jun 2023 14:59:20 GMT
- Title: Toward Fair Facial Expression Recognition with Improved Distribution Alignment
- Authors: Mojtaba Kolahdouzi and Ali Etemad
- Abstract summary: We present a novel approach to mitigate bias in facial expression recognition (FER) models.
Our method aims to reduce sensitive attribute information, such as gender, age, or race, in the embeddings produced by FER models.
For the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces.
- Score: 19.442685015494316
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a novel approach to mitigate bias in facial expression recognition
(FER) models. Our method aims to reduce sensitive attribute information, such as
gender, age, or race, in the embeddings produced by FER models. We employ a
kernel mean shrinkage estimator to estimate the kernel mean of the
distributions of the embeddings associated with different sensitive attribute
groups, such as young and old, in the Hilbert space. Using this estimation, we
calculate the maximum mean discrepancy (MMD) distance between the distributions
and incorporate it in the classifier loss along with an adversarial loss, which
is then minimized through the learning process to improve the distribution
alignment. Our method makes sensitive attributes less recognizable for the
model, which in turn promotes fairness. Additionally, for the first time, we
analyze the notion of attractiveness as an important sensitive attribute in FER
models and demonstrate that FER models can indeed exhibit biases towards more
attractive faces. To prove the efficacy of our model in reducing bias regarding
different sensitive attributes (including the newly proposed attractiveness
attribute), we perform several experiments on two widely used datasets, CelebA
and RAF-DB. The results in terms of both accuracy and fairness measures
outperform the state-of-the-art in most cases, demonstrating the effectiveness
of the proposed method.
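As a rough illustration of the loss described in the abstract, the sketch below combines the classification loss with a shrinkage-based MMD penalty between the embeddings of two sensitive-attribute groups and an adversarial confusion term. It is a minimal reading of the abstract, not the authors' implementation: the RBF kernel, the zero-target shrinkage, the confusion-style adversarial term, and all names and weights (rbf_gram, mmd2_shrunk, total_loss, sigma, lam, alpha, beta) are assumptions.

```python
# Hedged sketch of an MMD-based distribution-alignment loss (not the paper's code).
import torch
import torch.nn.functional as F

def rbf_gram(x, y, sigma=1.0):
    """Gram matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 * sigma^2))."""
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_shrunk(a, b, sigma=1.0, lam=0.1):
    """Biased squared-MMD estimate between embedding groups a and b, with a
    simple shrinkage of both empirical kernel means toward zero
    (lam = 0 recovers the plain estimate)."""
    k_aa = rbf_gram(a, a, sigma).mean()
    k_bb = rbf_gram(b, b, sigma).mean()
    k_ab = rbf_gram(a, b, sigma).mean()
    return (1.0 - lam) ** 2 * (k_aa + k_bb - 2.0 * k_ab)

def total_loss(logits, labels, embeddings, group, adv_logits, alpha=1.0, beta=1.0):
    """Task cross-entropy + MMD alignment between two sensitive-attribute groups
    (e.g. young vs. old) + an adversarial confusion term that pushes the
    attribute head's predictions toward uniform."""
    ce = F.cross_entropy(logits, labels)
    mmd = mmd2_shrunk(embeddings[group == 0], embeddings[group == 1])
    confusion = -F.log_softmax(adv_logits, dim=1).mean()
    return ce + alpha * mmd + beta * confusion
```

In practice the bandwidth sigma and shrinkage factor lam would need tuning, and the adversarial attribute head would typically be trained in alternation with the encoder rather than in a single joint step.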
Related papers
- Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement [3.0820287240219795]
We propose a novel approach to mitigate biases in computer vision models by utilizing counterfactual generation and fine-tuning.
Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples.
We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods.
arXiv Detail & Related papers (2024-04-18T00:41:32Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to sensitive attributes.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition [6.015556590955814]
We propose to mitigate bias by guiding the model's focus towards task-relevant features using domain knowledge.
We show that incorporating task-relevant features via our method can improve model fairness at minimal cost to classification performance.
arXiv Detail & Related papers (2023-03-08T21:28:02Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Lightweight Facial Attractiveness Prediction Using Dual Label Distribution [16.60169799392108]
Facial attractiveness prediction (FAP) aims to assess facial attractiveness automatically based on human aesthetic perception.
We present a novel end-to-end FAP approach that integrates dual label distribution and lightweight design.
Our approach achieves promising results and succeeds in balancing performance and efficiency.
arXiv Detail & Related papers (2022-12-04T04:19:36Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification [2.922007656878633]
We propose a new clustering method based on Variational Bayesian inference, further improved by Adaptive Dimension Reduction.
Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various Few-Shot benchmarks.
arXiv Detail & Related papers (2022-09-18T10:29:02Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or face a dramatic drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves the near-distribution novelty detection by 6% and passes the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples whose confidence exceeds the threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Learning Expectation of Label Distribution for Facial Age and Attractiveness Estimation [65.5880700862751]
We analyze the essential relationship between two state-of-the-art methods (Ranking-CNN and DLDL) and show that the Ranking method is in fact learning label distribution implicitly.
We propose a lightweight network architecture and a unified framework that can jointly learn the facial attribute distribution and regress the attribute value.
Our method achieves new state-of-the-art results using a single model with 36× fewer parameters and 3× faster inference speed on facial age/attractiveness estimation.
arXiv Detail & Related papers (2020-07-03T15:46:53Z)
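The ATC entry above is concrete enough for a small worked example. The sketch below is a hypothetical NumPy reading of that summary, not the authors' released code: it learns a confidence threshold on labeled source data and predicts target accuracy as the fraction of unlabeled target examples above it. The helper name atc_predict_accuracy and the use of max-softmax confidence are illustrative assumptions.

```python
# Hedged sketch of the ATC idea summarized above (hypothetical helper).
import numpy as np

def atc_predict_accuracy(source_conf, source_correct, target_conf):
    """source_conf: per-example confidence (e.g. max softmax) on labeled source data.
    source_correct: 0/1 indicator of whether each source prediction is correct.
    target_conf: per-example confidence on unlabeled target data."""
    source_acc = source_correct.mean()
    # Choose the threshold so that the fraction of source examples whose
    # confidence exceeds it matches the source accuracy.
    threshold = np.quantile(source_conf, 1.0 - source_acc)
    # Predicted target accuracy: fraction of target examples above the threshold.
    return float((target_conf >= threshold).mean())
```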