Fairness and Bias in Multimodal AI: A Survey
- URL: http://arxiv.org/abs/2406.19097v1
- Date: Thu, 27 Jun 2024 11:26:17 GMT
- Title: Fairness and Bias in Multimodal AI: A Survey
- Authors: Tosin Adewumi, Lama Alkhaled, Namrata Gurung, Goya van Boven, Irene Pagliai
- Abstract summary: We fill a gap regarding the minimal study of fairness and bias in Large Multimodal Models (LMMs).
We identify a new category of quantifying bias (preuse).
We critically discuss the various ways researchers are addressing these challenges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The importance of addressing fairness and bias in artificial intelligence (AI) systems cannot be over-emphasized. Mainstream media has been awash with news of incidents around stereotypes and bias in many of these systems in recent years. In this survey, we fill a gap with regard to the minimal study of fairness and bias in Large Multimodal Models (LMMs) compared to Large Language Models (LLMs), providing 50 examples of datasets and models along with the challenges affecting them; we identify a new category of quantifying bias (preuse), in addition to the two well-known ones in the literature: intrinsic and extrinsic; and we critically discuss the various ways researchers are addressing these challenges. Our method involved two slightly different search queries on Google Scholar, which returned 33,400 and 538,000 results for the terms "Fairness and bias in Large Multimodal Models" and "Fairness and bias in Large Language Models", respectively. We believe this work contributes to filling this gap and providing insight to researchers and other stakeholders on ways to address the challenge of fairness and bias in multimodal AI.
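To make the intrinsic category concrete, here is a minimal, self-contained sketch of a WEAT-style association test over toy embeddings. The vectors, word lists, and effect-size computation are illustrative assumptions, not the survey's own protocol; a real measurement would use a model's actual embeddings.

```python
# A minimal WEAT-style intrinsic bias score over word embeddings.
# Toy random vectors stand in for real embeddings (see caveats above).
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # Differential association of word w with attribute sets A and B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over target sets X and Y.
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in
       ["doctor", "engineer", "nurse", "teacher", "he", "him", "she", "her"]}
X = [emb["doctor"], emb["engineer"]]   # target set 1
Y = [emb["nurse"], emb["teacher"]]     # target set 2
A = [emb["he"], emb["him"]]            # attribute set 1
B = [emb["she"], emb["her"]]           # attribute set 2
print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):+.3f}")
```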
Related papers
- VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model [72.13121434085116]
VLBiasBench is a benchmark aimed at evaluating biases in Large Vision-Language Models (LVLMs).
We construct a dataset encompassing nine distinct categories of social bias, including age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status, plus two intersectional bias categories (race x gender and race x socioeconomic status).
We conduct extensive evaluations on 15 open-source models as well as one advanced closed-source model, providing new insights into the biases revealed by these models.
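As a rough, text-only illustration of the paired-prompt pattern such benchmarks rely on (VLBiasBench itself is image-grounded), the sketch below scores prompt pairs that differ only in a demographic term; score_model and the prompts are hypothetical stand-ins for a real LVLM call.

```python
# Hypothetical sketch of a paired-prompt bias check: compare a model's
# scores on prompts that differ only in a demographic term.
from statistics import mean

def score_model(prompt: str) -> float:
    # Placeholder scorer; a real probe would query an LVLM here.
    return 0.9 if "young" in prompt else 0.7

PAIRS = [
    ("Describe this young person's competence.",
     "Describe this old person's competence."),
    ("Is the young applicant qualified?",
     "Is the old applicant qualified?"),
]

# Mean score gap across paired prompts; 0 would mean parity.
gap = mean(score_model(a) - score_model(b) for a, b in PAIRS)
print(f"age-pair score gap: {gap:+.2f}")
```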
arXiv Detail & Related papers (2024-06-20T10:56:59Z) - Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models [31.199796752545478]
With the rapid advancement of large language models (LLMs), information retrieval systems, such as search engines and recommender systems, have undergone a significant paradigm shift.
arXiv Detail & Related papers (2024-04-17T15:05:03Z) - MAFIA: Multi-Adapter Fused Inclusive LanguAge Models [13.793816113015513]
Pretrained Language Models (PLMs) are widely used in NLP for various tasks.
Recent studies have identified various biases that such models exhibit and have proposed methods to correct these biases.
We propose a debiasing model that exploits the synergy amongst various societal biases and enables multi-bias debiasing simultaneously.
arXiv Detail & Related papers (2024-02-12T09:41:00Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
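For reference, a minimal sketch of reweighing in its classic preprocessing form (Kamiran and Calders), which weights each (group, label) cell so that group and label appear independent; the paper's drift-based variant may differ in detail.

```python
# Classic reweighing preprocessor: w(g, y) = P(g) * P(y) / P(g, y),
# estimated from counts, so group and label look independent after
# weighting. Illustrative only; not this paper's exact method.
from collections import Counter

def reweigh(groups, labels):
    n = len(groups)
    g_cnt, y_cnt = Counter(groups), Counter(labels)
    gy_cnt = Counter(zip(groups, labels))
    return [g_cnt[g] * y_cnt[y] / (n * gy_cnt[(g, y)])
            for g, y in zip(groups, labels)]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
for w, g, y in zip(reweigh(groups, labels), groups, labels):
    print(f"group={g} label={y} weight={w:.2f}")
```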
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition [1.5340540198612824]
We propose three metrics: two focus on the representational and stereotypical bias of the dataset, and the third on the residual bias of the trained model.
We demonstrate the usefulness of the metrics by applying them to a FER problem based on the popular AffectNet dataset.
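As a hedged illustration of what such dataset-level metrics can look like, the sketch below uses one plausible pair of stand-ins: representational bias as one minus the normalized entropy of the demographic distribution, and stereotypical bias as the mutual information between group and emotion label. These mirror the intent, not necessarily the paper's exact definitions.

```python
# Illustrative dataset-level bias metrics (assumed definitions, not
# necessarily the paper's): representational bias from the demographic
# distribution, stereotypical bias from group-label dependence.
import math
from collections import Counter

def norm_entropy(items):
    n, cnt = len(items), Counter(items)
    h = -sum(c / n * math.log2(c / n) for c in cnt.values())
    return h / math.log2(len(cnt)) if len(cnt) > 1 else 0.0

def mutual_info(xs, ys):
    # I(X; Y) in bits, estimated from joint and marginal counts.
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

groups = ["m", "m", "m", "f", "m", "m"]              # toy demographic labels
emotions = ["joy", "joy", "anger", "joy", "anger", "joy"]
print(f"representational bias: {1 - norm_entropy(groups):.2f}")
print(f"stereotypical bias (MI bits): {mutual_info(groups, emotions):.3f}")
```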
arXiv Detail & Related papers (2022-05-20T09:40:42Z) - Bias and Fairness on Multimodal Emotion Detection Algorithms [0.0]
We study how multimodal approaches affect system bias and fairness.
We find that text alone has the least bias and accounts for the majority of the models' performance.
arXiv Detail & Related papers (2022-05-11T20:03:25Z) - Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks [7.763173131630868]
We propose two metrics to quantitatively evaluate the class-wise bias of two models in comparison to one another.
By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias.
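One simple way to realize a class-wise comparison between two models is to contrast their per-class accuracies, as in the hypothetical sketch below; the paper's own metrics may be defined differently.

```python
# Hypothetical class-wise bias comparison: per-class accuracy for two
# models and the class-conditional gaps between them.
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    hit, tot = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        tot[t] += 1
        hit[t] += int(t == p)
    return {c: hit[c] / tot[c] for c in tot}

y_true  = [0, 0, 0, 1, 1, 1]
model_a = [0, 0, 1, 1, 1, 1]   # toy predictions
model_b = [0, 0, 0, 1, 0, 0]

acc_a, acc_b = (per_class_accuracy(y_true, p) for p in (model_a, model_b))
gaps = {c: acc_a[c] - acc_b[c] for c in acc_a}
print("per-class accuracy gaps (A - B):", gaps)
```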
arXiv Detail & Related papers (2021-10-08T22:35:34Z) - UnQovering Stereotyping Biases via Underspecified Questions [68.81749777034409]
We present UNQOVER, a framework to probe and quantify biases through underspecified questions.
We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors.
We use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion.
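A simplified sketch of the underspecified-question idea: fill a template with two subjects and a stereotype attribute, average the model's preference over both subject orderings to dampen position artifacts, and compare. subject_score and the template are toy stand-ins for a real QA model, and the full UNQOVER metric also corrects for question negation.

```python
# Simplified UNQOVER-style probe over an underspecified template.
def subject_score(context: str, question: str, subject: str) -> float:
    # Placeholder scorer; a real probe would query a QA model's
    # span score for `subject` given the context and question.
    return 0.6 if subject == "John" else 0.4

TEMPLATE = "{s1} and {s2} are neighbors. Who {attr}?"

def preference(s1: str, s2: str, attr: str) -> float:
    # Positive => model leans toward s1 for this attribute.
    scores = []
    for a, b in ((s1, s2), (s2, s1)):        # both subject orderings
        ctx = TEMPLATE.format(s1=a, s2=b, attr=attr)
        scores.append(subject_score(ctx, attr, s1)
                      - subject_score(ctx, attr, s2))
    return sum(scores) / len(scores)

print(f"bias toward John: {preference('John', 'Mary', 'is bad at math'):+.2f}")
```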
arXiv Detail & Related papers (2020-10-06T01:49:52Z)