Fairness in Machine Learning: A Survey
- URL: http://arxiv.org/abs/2010.04053v1
- Date: Sun, 4 Oct 2020 21:01:34 GMT
- Title: Fairness in Machine Learning: A Survey
- Authors: Simon Caton and Christian Haas
- Abstract summary: There is significant literature on approaches to mitigate bias and promote fairness.
This article seeks to provide an overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the Machine Learning literature.
It organises approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorizing these into a further 11 method areas.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Machine Learning technologies become increasingly used in contexts that affect citizens, companies as well as researchers need to be confident that their application of these methods will not have unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigate bias and promote fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the Machine Learning literature. It organises approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorizing these into a further 11 method areas. Although much of the literature emphasizes binary classification, a discussion of fairness in regression, recommender systems, unsupervised learning, and natural language processing is also provided along with a selection of currently available open source libraries. The article concludes by summarising open challenges articulated as four dilemmas for fairness research.
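To make the pre-processing / in-processing / post-processing framing above concrete, the following is a minimal sketch in Python using only NumPy and scikit-learn. It is not code from the survey or from any library it discusses: the synthetic data, the simplified reweighing scheme, and the helper demographic_parity_difference are assumptions made purely for illustration.

```python
# Minimal illustration (not from the survey): synthetic data with a binary
# sensitive attribute, a baseline classifier, one pre-processing idea
# (instance reweighing) and one post-processing idea (group-specific thresholds).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
a = rng.integers(0, 2, size=n)                    # sensitive attribute (0/1)
x = rng.normal(size=(n, 3)) + a[:, None] * 0.8    # features correlated with a
y = (x @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=n) > 0.7).astype(int)

def demographic_parity_difference(y_pred, sensitive):
    """|P(y_hat=1 | a=0) - P(y_hat=1 | a=1)|, one common group-fairness metric."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

# Baseline: train without any mitigation.
clf = LogisticRegression().fit(x, y)
scores = clf.predict_proba(x)[:, 1]
print("baseline DP difference:", demographic_parity_difference(scores > 0.5, a))

# Pre-processing: reweigh instances so every (group, label) cell carries equal
# total weight before training (a simplified variant of reweighing).
w = np.empty(n)
for g in (0, 1):
    for lbl in (0, 1):
        cell = (a == g) & (y == lbl)
        w[cell] = n / (4 * cell.sum())
clf_pre = LogisticRegression().fit(x, y, sample_weight=w)
pred_pre = clf_pre.predict_proba(x)[:, 1] > 0.5
print("reweighed DP difference:", demographic_parity_difference(pred_pre, a))

# Post-processing: keep the baseline model but choose per-group thresholds so
# both groups receive positive predictions at roughly the same rate.
target_rate = (scores > 0.5).mean()
thresholds = {g: np.quantile(scores[a == g], 1 - target_rate) for g in (0, 1)}
pred_post = np.array([s > thresholds[g] for s, g in zip(scores, a)])
print("post-processed DP difference:", demographic_parity_difference(pred_post, a))
```

An in-processing method would instead modify the learning step itself, for example by adding a fairness constraint or penalty term to the training objective rather than changing the data or the predictions.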
Related papers
- Advancing Fairness in Natural Language Processing: From Traditional Methods to Explainability [0.9065034043031668]
The thesis addresses the need for equity and transparency in NLP systems.
It introduces an innovative algorithm to mitigate biases in high-risk NLP applications.
It also presents a model-agnostic explainability method that identifies and ranks concepts in Transformer models.
arXiv Detail & Related papers (2024-10-16T12:38:58Z)
- A Catalog of Fairness-Aware Practices in Machine Learning Engineering [13.012624574172863]
Machine learning's widespread adoption in decision-making processes raises concerns about fairness.
There remains a gap in understanding and categorizing practices for engineering fairness throughout the machine learning lifecycle.
This paper presents a novel catalog of practices for addressing fairness in machine learning derived from a systematic mapping study.
arXiv Detail & Related papers (2024-08-29T16:28:43Z)
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Bias and unfairness in machine learning models: a systematic literature review [43.55994393060723]
This study aims to examine existing knowledge on bias and unfairness in Machine Learning models.
A Systematic Literature Review found 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases.
arXiv Detail & Related papers (2022-02-16T16:27:00Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone into producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, these fairness solutions are rarely applied in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.