Towards Fair Affective Robotics: Continual Learning for Mitigating Bias
in Facial Expression and Action Unit Recognition
- URL: http://arxiv.org/abs/2103.09233v1
- Date: Mon, 15 Mar 2021 18:36:14 GMT
- Title: Towards Fair Affective Robotics: Continual Learning for Mitigating Bias
in Facial Expression and Action Unit Recognition
- Authors: Ozgur Kara, Nikhil Churamani and Hatice Gunes
- Abstract summary: We propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems.
We compare different state-of-the-art bias mitigation approaches with CL-based strategies for fairness on expression recognition and Action Unit (AU) detection tasks.
Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques.
- Score: 5.478764356647437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As affective robots become integral in human life, these agents must be able
to fairly evaluate human affective expressions without discriminating against
specific demographic groups. With bias in Machine Learning (ML) systems
identified as a critical problem, different approaches have been proposed to
mitigate such biases at both the data and algorithmic levels. In this work, we
propose Continual Learning (CL) as an effective strategy to enhance fairness in
Facial Expression Recognition (FER) systems, guarding against biases arising
from imbalances in data distributions. We compare different state-of-the-art
bias mitigation approaches with CL-based strategies for fairness on expression
recognition and Action Unit (AU) detection tasks using popular benchmarks for
each: RAF-DB and BP4D. Our experiments show that CL-based methods, on average,
outperform popular bias mitigation techniques, strengthening the need for
further investigation into CL for the development of fairer FER algorithms.
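As a rough illustration of the idea, the sketch below trains a classifier domain-incrementally, visiting one demographic group (domain) at a time while an EWC-style quadratic penalty discourages forgetting earlier groups. EWC is only one of several CL strategies such comparisons cover, and the toy data, model size, and hyper-parameters here are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch: domain-incremental continual learning over demographic
# groups with an EWC-style penalty. All names and values are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 7))  # 7 expression classes
opt = torch.optim.SGD(model.parameters(), lr=0.05)

fisher, anchor = {}, {}  # per-parameter importance and reference weights

def make_group(n):
    """Synthetic stand-in for one demographic group's (features, labels)."""
    return torch.randn(n, 32), torch.randint(0, 7, (n,))

def ewc_penalty():
    """Quadratic drift penalty on parameters important for earlier groups."""
    return sum((fisher[n] * (p - anchor[n]).pow(2)).sum()
               for n, p in model.named_parameters() if n in fisher)

groups = [make_group(256) for _ in range(3)]  # e.g. three demographic domains
lam = 10.0  # regularization strength (assumed value)

for gid, (x, y) in enumerate(groups):
    for _ in range(50):  # train on the current group only
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if fisher:  # after the first group, penalize forgetting
            loss = loss + lam * ewc_penalty()
        loss.backward()
        opt.step()
    # Estimate diagonal Fisher information on this group, then anchor weights
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    for n, p in model.named_parameters():
        g2 = p.grad.detach().pow(2)
        fisher[n] = fisher.get(n, torch.zeros_like(g2)) + g2
        anchor[n] = p.detach().clone()
    print(f"finished group {gid}, loss={loss.item():.3f}")
```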
Related papers
- Group Robust Classification Without Any Group Information [5.053622900542495]
This study contends that current bias-unsupervised approaches to group robustness continue to rely on group information to achieve optimal performance.
In particular, bias labels are still crucial for effective model selection, restricting the practicality of these methods in real-world scenarios.
We propose a revised methodology for training and validating debiased models in an entirely bias-unsupervised manner.
arXiv Detail & Related papers (2023-10-28T01:29:18Z)
- Continual Facial Expression Recognition: A Benchmark [3.181579197770883]
This work presents the Continual Facial Expression Recognition (ConFER) benchmark that evaluates popular CL techniques on FER tasks.
It presents a comparative analysis of several CL-based approaches on popular FER datasets such as CK+, RAF-DB, and AffectNet.
CL techniques, under different learning settings, are shown to achieve state-of-the-art (SOTA) performance across several datasets.
arXiv Detail & Related papers (2023-05-10T20:35:38Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Fair Robust Active Learning by Joint Inconsistency [22.150782414035422]
We introduce a novel task, Fair Robust Active Learning (FRAL), integrating conventional FAL and adversarial robustness.
We develop a simple yet effective FRAL strategy by Joint INconsistency (JIN).
Our method exploits the prediction inconsistency between benign and adversarial samples as well as between standard and robust models.
arXiv Detail & Related papers (2022-09-22T01:56:41Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based Active Learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition [5.478764356647437]
We propose the novel usage of Continual Learning (CL) as a potent bias mitigation method to enhance the fairness of FER systems.
We compare different non-CL-based and CL-based methods for their classification accuracy and fairness scores (a common formulation of such a score is sketched after this list) on expression recognition and Action Unit (AU) detection tasks.
Our experimental results show that CL-based methods, on average, outperform other popular bias mitigation techniques on both accuracy and fairness metrics.
arXiv Detail & Related papers (2021-03-15T18:22:17Z)
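For concreteness, one common group-fairness score of the kind reported in such comparisons is the ratio of worst-group to best-group accuracy per sensitive attribute, where 1.0 indicates parity. The metric choice and the toy predictions below are illustrative assumptions; each paper defines its own exact measure.

```python
# Minimal sketch of a min/max group-fairness score over one sensitive
# attribute. The metric and the toy data are illustrative assumptions.
from collections import defaultdict

def fairness_score(y_true, y_pred, groups):
    """Ratio of worst-group to best-group accuracy (1.0 = parity)."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    accs = [correct[g] / total[g] for g in total]
    return min(accs) / max(accs) if max(accs) > 0 else 0.0

# Toy example: two demographic groups with unequal accuracy
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fairness_score(y_true, y_pred, groups))  # 0.5: group b is half as accurate
```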