Bias and Fairness on Multimodal Emotion Detection Algorithms
- URL: http://arxiv.org/abs/2205.08383v1
- Date: Wed, 11 May 2022 20:03:25 GMT
- Title: Bias and Fairness on Multimodal Emotion Detection Algorithms
- Authors: Matheus Schmitz, Rehan Ahmed, Jimi Cao
- Abstract summary: We study how multimodal approaches affect system bias and fairness.
We find that text alone has the least bias, and accounts for the majority of the models' performances.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous studies have shown that machine learning algorithms can latch onto
protected attributes such as race and gender and generate predictions that
systematically discriminate against one or more groups. To date the majority of
bias and fairness research has been on unimodal models. In this work, we
explore the biases that exist in emotion recognition systems in relation to
the modalities utilized, and study how multimodal approaches affect system bias
and fairness. We consider audio, text, and video modalities, as well as all
possible multimodal combinations of those, and find that text alone has the
least bias, and accounts for the majority of the models' performances, raising
doubts about the worthiness of multimodal emotion recognition systems when bias
and fairness are desired alongside model performance.
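The comparison the abstract describes can be pictured as a loop over modality subsets, scoring each on accuracy and a fairness measure. Below is a minimal sketch of that protocol, not the authors' code: synthetic feature blocks stand in for real audio, text, and video embeddings, and the fairness measure is a simple per-group accuracy gap (the paper's exact metrics are not given in this summary).

```python
# Minimal sketch (not the paper's code): score every modality combination on
# accuracy and a toy per-group fairness gap, using synthetic features.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-modality feature blocks and a binary protected attribute.
feats = {"audio": rng.normal(size=(n, 8)),
         "text": rng.normal(size=(n, 8)),
         "video": rng.normal(size=(n, 8))}
group = rng.integers(0, 2, size=n)   # e.g. gender
y = rng.integers(0, 2, size=n)       # toy emotion label

def fairness_gap(y_true, y_pred, g):
    """Absolute accuracy difference between the two groups."""
    accs = [np.mean(y_pred[g == v] == y_true[g == v]) for v in (0, 1)]
    return abs(accs[0] - accs[1])

for r in range(1, 4):
    for combo in combinations(feats, r):
        X = np.hstack([feats[m] for m in combo])
        X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
            X, y, group, random_state=0)
        pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
        print(combo, f"acc={np.mean(pred == y_te):.3f}",
              f"gap={fairness_gap(y_te, pred, g_te):.3f}")
```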
Related papers
- MABR: A Multilayer Adversarial Bias Removal Approach Without Prior Bias Knowledge [6.208151505901749]
Models trained on real-world data often mirror and exacerbate existing social biases.
We introduce a novel adversarial training strategy that operates independently of prior bias-type knowledge.
Our method effectively reduces social biases without the need for demographic annotations.
arXiv Detail & Related papers (2024-08-10T09:11:01Z)
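The summary above only names the strategy; as a point of reference, here is a minimal sketch of generic adversarial debiasing with a gradient reversal layer. One caveat: the classic setup below consumes protected-attribute labels, which is exactly what MABR says it avoids, so this shows only the shared adversarial mechanism, not MABR itself.

```python
# Sketch of generic adversarial debiasing (MABR's multilayer design is not
# given here): a gradient reversal layer discourages the encoder from
# encoding the protected attribute.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Flip the adversary's gradient before it reaches the encoder.
        return -ctx.lam * grad_out, None

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
task_head = nn.Linear(16, 4)   # e.g. 4 emotion classes
adv_head = nn.Linear(16, 2)    # predicts the protected attribute

x = torch.randn(64, 32)
y_task = torch.randint(0, 4, (64,))
y_prot = torch.randint(0, 2, (64,))  # needed here; MABR avoids such labels

z = encoder(x)
loss = (nn.functional.cross_entropy(task_head(z), y_task)
        + nn.functional.cross_entropy(adv_head(GradReverse.apply(z, 1.0)),
                                      y_prot))
loss.backward()  # encoder receives task gradient minus adversary gradient
```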
- Fairness and Bias in Multimodal AI: A Survey [0.20971479389679337]
The importance of addressing fairness and bias in artificial intelligence (AI) systems cannot be over-emphasized.
We fill a gap with regard to the relatively minimal study of fairness and bias in Large Multimodal Models (LMMs) compared to Large Language Models (LLMs).
We provide 50 examples of datasets and models related to both types of AI along with the challenges of bias affecting them.
arXiv Detail & Related papers (2024-06-27T11:26:17Z)
- A Multi-Task, Multi-Modal Approach for Predicting Categorical and Dimensional Emotions [0.0]
We propose a multi-task, multi-modal system that predicts categorical and dimensional emotions.
Results emphasise the importance of cross-regularisation between the two types of emotions.
arXiv Detail & Related papers (2023-12-31T16:48:03Z)
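A plausible reading of the multi-task setup is sketched below under stated assumptions: one shared encoder feeds a categorical head and a dimensional (e.g., valence/arousal) head, trained with a weighted joint loss. The cross-regularisation term the summary mentions is not specified here, so a plain weighted sum stands in for it.

```python
# Minimal multi-task sketch (assumptions, not the paper's architecture).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(40, 32), nn.ReLU())
cat_head = nn.Linear(32, 6)   # e.g. 6 categorical emotions
dim_head = nn.Linear(32, 2)   # valence and arousal

x = torch.randn(16, 40)
y_cat = torch.randint(0, 6, (16,))
y_dim = torch.rand(16, 2)

h = encoder(x)
loss = (nn.functional.cross_entropy(cat_head(h), y_cat)
        + 0.5 * nn.functional.mse_loss(dim_head(h), y_dim))  # lambda = 0.5
loss.backward()
```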
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
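The enumeration step is easy to picture in code. The sketch below builds such a prompt grid; the marker and profession lists are illustrative, not the paper's.

```python
# Build a grid of prompts that vary identity markers; each prompt would be
# sent to the TTI system under study and the resulting image sets compared.
from itertools import product

genders = ["woman", "man", "non-binary person"]
ethnicities = ["Black", "East Asian", "Hispanic", "White"]
professions = ["doctor", "nurse", "engineer"]

prompts = [f"a photo of a {e} {g} working as a {p}"
           for g, e, p in product(genders, ethnicities, professions)]
print(len(prompts), prompts[0])  # 36 prompts in this toy grid
```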
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
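For concreteness, the two criteria can be written as simple post-hoc checks; DualFair's actual contrastive training objective is not given in this summary, and everything below is an illustrative stand-in.

```python
# Toy checks for the two fairness notions DualFair targets.
import numpy as np

def group_fairness_gap(pred, group):
    """Demographic parity: difference in positive-prediction rates."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def counterfactual_gap(model, x, sens_idx):
    """How often predictions change when only the sensitive feature flips."""
    x_cf = x.copy()
    x_cf[:, sens_idx] = 1 - x_cf[:, sens_idx]
    return np.mean(model(x) != model(x_cf))

# Toy model and data, for illustration only.
model = lambda x: (x.sum(axis=1) > 0).astype(int)
x = np.random.default_rng(0).normal(size=(100, 5))
x[:, 0] = (x[:, 0] > 0)  # column 0 plays the binary sensitive attribute
print(group_fairness_gap(model(x), x[:, 0].astype(int)),
      counterfactual_gap(model, x, sens_idx=0))
```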
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
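The edge-intervention idea can be illustrated with a toy linear structural model, sketched below. Variable names and coefficients are invented; D-BIAS performs this interactively over a causal network learned from the user's data.

```python
# Delete the gender -> salary edge in a toy linear SEM and resimulate.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n).astype(float)
education = 0.5 * gender + rng.normal(size=n)  # a second, indirect pathway

def simulate_salary(gender_coef):
    return gender_coef * gender + 1.0 * education + rng.normal(size=n)

biased = simulate_salary(gender_coef=2.0)    # direct edge present
debiased = simulate_salary(gender_coef=0.0)  # direct edge deleted by the user
for name, s in [("biased", biased), ("debiased", debiased)]:
    gap = s[gender == 1].mean() - s[gender == 0].mean()
    print(name, f"mean salary gap = {gap:.2f}")
# The indirect gender -> education -> salary path survives, which is why the
# tool lets users keep weakening edges until the audit looks acceptable.
```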
- A Multibias-mitigated and Sentiment Knowledge Enriched Transformer for Debiasing in Multimodal Conversational Emotion Recognition [9.020664590692705]
Multimodal emotion recognition in conversations (mERC) is an active research topic in natural language processing (NLP).
Innumerable implicit prejudices and preconceptions fill human language and conversations.
Existing data-driven mERC approaches may offer higher emotional scores on utterances by females than males.
arXiv Detail & Related papers (2022-07-17T08:16:49Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always at least as biased as the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
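The audit pattern is straightforward to sketch: degrade images with increasing noise and track per-group detection rates. The detector below is a stub standing in for the real academic and commercial systems the study compared.

```python
# Toy noise-robustness audit with per-group detection-rate disparity.
import numpy as np

rng = np.random.default_rng(0)

def detector(images):
    """Stub: pretend detection succeeds when the image is not too noisy."""
    return images.std(axis=(1, 2)) < 1.3

images = rng.normal(0, 1, size=(200, 32, 32))  # toy "face" images
groups = rng.integers(0, 2, size=200)          # demographic labels

for sigma in (0.0, 0.5, 1.0):
    noisy = images + rng.normal(0, sigma, size=images.shape)
    hits = detector(noisy)
    rates = [hits[groups == g].mean() for g in (0, 1)]
    print(f"sigma={sigma}: rates={rates[0]:.2f}/{rates[1]:.2f}, "
          f"disparity={abs(rates[0] - rates[1]):.2f}")
```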
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.