Bias and unfairness in machine learning models: a systematic literature
review
- URL: http://arxiv.org/abs/2202.08176v1
- Date: Wed, 16 Feb 2022 16:27:00 GMT
- Title: Bias and unfairness in machine learning models: a systematic literature
review
- Authors: Tiago Palma Pagano, Rafael Bessa Loureiro, Maira Matos Araujo,
Fernanda Vitoria Nascimento Lisboa, Rodrigo Matos Peixoto, Guilherme Aragao
de Sousa Guimaraes, Lucas Lisboa dos Santos, Gustavo Oliveira Ramos Cruz,
Ewerton Lopes Silva de Oliveira, Marco Cruz, Ingrid Winkler, Erick Giovani
Sperandio Nascimento
- Abstract summary: This study aims to examine existing knowledge on bias and unfairness in Machine Learning models.
A Systematic Literature Review found 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases.
- Score: 43.55994393060723
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One of the difficulties of artificial intelligence is to ensure that model
decisions are fair and free of bias. In research, datasets, metrics,
techniques, and tools are applied to detect and mitigate algorithmic unfairness
and bias. This study aims to examine existing knowledge on bias and unfairness
in Machine Learning models, identifying mitigation methods, fairness metrics,
and supporting tools. A Systematic Literature Review found 40 eligible articles
published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and
Google Scholar knowledge bases. The results show numerous approaches for
detecting and mitigating bias and unfairness in ML technologies, along with a
variety of clearly defined metrics in the literature. We recommend further
research to define which techniques and metrics should be employed in each
case, so as to standardize practice, ensure the impartiality of machine
learning models, and allow the most appropriate metric to be chosen for
detecting bias and unfairness in a given context.
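To make the notion of a fairness metric concrete, here is a minimal Python sketch of statistical parity difference, one of the most commonly cited metrics in this literature; the function name and toy data are illustrative assumptions, not taken from any reviewed paper.

import numpy as np

def statistical_parity_difference(y_pred, group):
    # P(y_hat = 1 | unprivileged group) - P(y_hat = 1 | privileged group);
    # a value near 0 indicates parity between the two groups.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Toy example: eight predictions, two groups of four individuals each.
print(statistical_parity_difference([1, 0, 1, 1, 0, 1, 1, 1],
                                    [0, 0, 0, 0, 1, 1, 1, 1]))  # 0.0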
Related papers
- Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation [1.0470286407954037]
Concerns have been raised that machine learning (ML) models may be biased and perpetuate or exacerbate inequality.
We present a four-stage model of developing ML assessments and applying bias mitigation methods.
arXiv Detail & Related papers (2024-10-21T02:32:14Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Measuring, Interpreting, and Improving Fairness of Algorithms using Causal Inference and Randomized Experiments [8.62694928567939]
We present an algorithm-agnostic framework (MIIF) to Measure, Interpret, and Improve the Fairness of an algorithmic decision.
We measure the algorithm bias using randomized experiments, which enables the simultaneous measurement of disparate treatment, disparate impact, and economic value.
We also develop an explainable machine learning model which accurately interprets and distills the beliefs of a black-box algorithm.
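Disparate impact, as mentioned above, is commonly quantified as a ratio of selection rates; a brief sketch follows, using the conventional four-fifths-rule threshold as an assumption rather than a detail of the MIIF framework itself.

import numpy as np

def disparate_impact_ratio(y_pred, group):
    # P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged).
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

ratio = disparate_impact_ratio([1, 0, 0, 1, 1, 1, 0, 1],
                               [0, 0, 0, 0, 1, 1, 1, 1])
print(ratio)  # 0.667: below the common 0.8 ("four-fifths") threshold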
arXiv Detail & Related papers (2023-09-04T19:45:18Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey [30.637712832450525]
We collect a total of 341 publications concerning bias mitigation for ML classifiers.
We investigate how existing bias mitigation methods are evaluated in the literature.
Based on the gathered insights, we hope to support practitioners in making informed choices when developing and evaluating new bias mitigation methods.
arXiv Detail & Related papers (2022-07-14T17:16:45Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is, to the authors' knowledge, the most comprehensive of its kind:
it considers the fairness, predictive performance, calibration quality, and
speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
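As a rough illustration of this kind of multi-criteria comparison, the following sketch scores two candidate pipelines on both predictive performance and statistical parity difference over synthetic data; the two models and the single fairness metric are assumptions standing in for the study's 28 pipelines and fuller metric set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
group = (X[:, 3] > 0).astype(int)  # synthetic protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

for name, model in [("logreg", LogisticRegression()),
                    ("forest", RandomForestClassifier(random_state=0))]:
    y_hat = model.fit(X, y).predict(X)  # in practice, score held-out data
    spd = y_hat[group == 0].mean() - y_hat[group == 1].mean()
    print(f"{name}: accuracy={accuracy_score(y, y_hat):.3f}, parity diff={spd:.3f}")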
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern accompanying the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework for fair semi-supervised learning in the pre-processing phase, including pseudo-labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
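A minimal sketch of the pseudo-labeling step described above, assuming a generic confidence-threshold recipe rather than the paper's exact pre-processing procedure:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 3))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(200, 3))  # unlabeled pool

# Fit on labeled data, then pseudo-label only the confident unlabeled points.
base = LogisticRegression().fit(X_lab, y_lab)
proba = base.predict_proba(X_unlab)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)  # threshold is an assumption

X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, (proba[confident] > 0.5).astype(int)])
final = LogisticRegression().fit(X_aug, y_aug)  # train on the augmented set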
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness [7.673007415383724]
We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks.
We have applied 7 mitigation techniques on these models and analyzed the fairness, mitigation results, and impacts on performance.
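The seven techniques are not named in this summary; as one illustrative pre-processing mitigation, the sketch below implements reweighing in the style of Kamiran and Calders (2012), which assigns instance weights that make the label statistically independent of the protected group.

import numpy as np

def reweighing_weights(y, group):
    # Weight each (group, label) cell by expected/observed frequency, so
    # under-represented combinations are up-weighted during training.
    y, group = np.asarray(y), np.asarray(group)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            w[mask] = ((group == g).mean() * (y == c).mean()) / mask.mean()
    return w

weights = reweighing_weights([1, 1, 0, 0, 1, 0], [0, 0, 0, 1, 1, 1])
# Pass `weights` as sample_weight to most scikit-learn fit() methods.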
arXiv Detail & Related papers (2020-05-21T23:35:53Z)