Domain-Incremental Continual Learning for Mitigating Bias in Facial
Expression and Action Unit Recognition
- URL: http://arxiv.org/abs/2103.08637v1
- Date: Mon, 15 Mar 2021 18:22:17 GMT
- Title: Domain-Incremental Continual Learning for Mitigating Bias in Facial
Expression and Action Unit Recognition
- Authors: Nikhil Churamani, Ozgur Kara and Hatice Gunes
- Abstract summary: We propose the novel usage of Continual Learning (CL) as a potent bias mitigation method to enhance the fairness of FER systems.
We compare different non-CL-based and CL-based methods for their classification accuracy and fairness scores on expression recognition and Action Unit (AU) detection tasks.
Our experimental results show that CL-based methods, on average, outperform other popular bias mitigation techniques on both accuracy and fairness metrics.
- Score: 5.478764356647437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Facial Expression Recognition (FER) systems become integrated into our
daily lives, these systems need to prioritise making fair decisions instead of
aiming at higher individual accuracy scores. With applications ranging from
surveillance to diagnosing mental and emotional health conditions of individuals,
these systems need to balance the accuracy vs fairness trade-off to make decisions
that do not unjustly discriminate against specific under-represented
demographic groups. With bias identified as a critical problem in facial analysis
systems, different methods have been proposed that aim to mitigate it at both the
data and algorithmic levels. In this work, we propose the novel usage of
Continual Learning (CL), in particular, using Domain-Incremental Learning
(Domain-IL) settings, as a potent bias mitigation method to enhance the
fairness of FER systems while guarding against biases arising from skewed data
distributions. We compare different non-CL-based and CL-based methods for their
classification accuracy and fairness scores on expression recognition and
Action Unit (AU) detection tasks using two popular benchmarks, the RAF-DB and
BP4D datasets, respectively. Our experimental results show that CL-based
methods, on average, outperform other popular bias mitigation techniques on
both accuracy and fairness metrics.
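To make the Domain-IL idea concrete: the model sees data from one demographic domain at a time, and a continual-learning regularizer discourages forgetting earlier domains, so no single group's data dominates training. The sketch below is a minimal illustration assuming an EWC-style (Elastic Weight Consolidation) penalty, one of several CL strategies such a comparison might include; the tiny MLP, hyperparameters, and synthetic per-domain data are hypothetical stand-ins, not the paper's actual setup.

```python
# Minimal Domain-IL sketch: one shared classifier trained over demographic
# "domains" presented sequentially, with an EWC-style quadratic penalty that
# discourages forgetting earlier domains. All names, sizes, and the choice of
# EWC are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for a FER backbone: 128-d features, 7 expression classes.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 7))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins for per-domain data (e.g., one demographic group each).
domains = [(torch.randn(256, 128), torch.randint(0, 7, (256,))) for _ in range(3)]

fisher, anchors = {}, {}  # per-parameter importance and reference values
lam = 100.0               # strength of the consolidation penalty

for x, y in domains:
    for _ in range(50):   # a few full-batch epochs per domain
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        # EWC penalty: stay close to parameters important for past domains.
        for n, p in model.named_parameters():
            if n in fisher:
                loss = loss + lam * (fisher[n] * (p - anchors[n]) ** 2).sum()
        loss.backward()
        opt.step()

    # Crude diagonal-Fisher proxy (squared gradients) estimated on this domain.
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    for n, p in model.named_parameters():
        fisher[n] = fisher.get(n, 0) + p.grad.detach() ** 2
        anchors[n] = p.detach().clone()
```

Comparing such a sequentially consolidated model against a jointly trained baseline on per-group performance is what the fairness scores then capture; a minimal sketch of that gap computation follows the related-papers list below.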
Related papers
- Improving Bias in Facial Attribute Classification: A Combined Impact of KL Divergence induced Loss Function and Dual Attention [3.5527561584422465]
Earlier systems often exhibited demographic bias, particularly in gender and racial classification, with lower accuracy for women and individuals with darker skin tones.
This paper presents a method using a dual attention mechanism with a pre-trained Inception-ResNet V1 model, enhanced by KL-divergence regularization and a cross-entropy loss function.
The experimental results show significant improvements in both fairness and classification accuracy, providing promising advances in addressing bias and enhancing the reliability of facial recognition systems.
arXiv Detail & Related papers (2024-10-15T01:29:09Z)
- Comprehensive Equity Index (CEI): Definition and Application to Bias Evaluation in Biometrics [47.762333925222926]
We present a novel metric to quantify biased behaviors of machine learning models.
We focus on and apply it to the operational evaluation of face recognition systems.
arXiv Detail & Related papers (2024-09-03T14:19:38Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Improving Fairness of AI Systems with Lossless De-biasing [15.039284892391565]
Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
arXiv Detail & Related papers (2021-05-10T17:38:38Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition [5.478764356647437]
We propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems.
We compare different state-of-the-art bias mitigation approaches with CL-based strategies for fairness on expression recognition and Action Unit (AU) detection tasks.
Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques.
arXiv Detail & Related papers (2021-03-15T18:36:14Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep learning-based medical image analysis systems.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
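For completeness, here is what a "fairness score" of the kind reported across these papers might reduce to in its simplest form: the spread of a performance metric across protected groups. This is a generic, hypothetical illustration (a max-min per-group accuracy gap), not the exact metric of any paper above.

```python
# Generic group-fairness gap: the spread of per-group accuracy across
# protected groups. Illustrative only; each paper above defines its own
# (often more elaborate) fairness metric.
import numpy as np

def fairness_gap(y_true, y_pred, groups):
    """Max-min per-group accuracy gap; 0.0 means all groups score equally."""
    accs = [(y_pred[groups == g] == y_true[groups == g]).mean()
            for g in np.unique(groups)]
    return max(accs) - min(accs)

# Toy usage with hypothetical predictions over two demographic groups.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gap(y_true, y_pred, groups))  # 0.25: group 0 at 0.75, group 1 at 1.0
```

A lower gap at comparable overall accuracy is the kind of outcome the CL-based methods above are reported to achieve.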
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.