Sociodemographic Bias in Language Models: A Survey and Forward Path
- URL: http://arxiv.org/abs/2306.08158v4
- Date: Fri, 1 Mar 2024 17:40:27 GMT
- Title: Sociodemographic Bias in Language Models: A Survey and Forward Path
- Authors: Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, Rebecca J.
Passonneau
- Abstract summary: We systematically organize the existing literature into three main areas: types of bias, quantifying bias, and debiasing techniques.
We identify current trends, limitations, and potential future directions in bias research.
We recommend using interdisciplinary approaches to combine works on LM bias with an understanding of the potential harms.
- Score: 8.01539480296785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a comprehensive survey of work on sociodemographic bias
in language models (LMs). Sociodemographic biases embedded within language
models can have harmful effects when deployed in real-world settings. We
systematically organize the existing literature into three main areas: types of
bias, quantifying bias, and debiasing techniques. We also track the evolution
of investigations of LM bias over the past decade. We identify current trends,
limitations, and potential future directions in bias research. To guide future
research towards more effective and reliable solutions, we present a checklist
of open questions. We also recommend using interdisciplinary approaches to
combine works on LM bias with an understanding of the potential harms.
Related papers
- Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception [13.592532358127293]
We investigate the presence and nature of bias within Large Language Models (LLMs)
We probe whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks.
We propose debiasing strategies, including prompt engineering and model fine-tuning.
arXiv Detail & Related papers (2024-03-22T00:59:48Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language
Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - Diagnosing and Debiasing Corpus-Based Political Bias and Insults in GPT2 [0.0]
Training large language models (LLMs) on extensive, unfiltered corpora sourced from the internet is a common and advantageous practice.
Recent research shows that generative pretrained transformer (GPT) language models can recognize their own biases and detect toxicity in generated content.
This study investigates the efficacy of the diagnosing-debiasing approach in mitigating two additional types of biases: insults and political bias.
arXiv Detail & Related papers (2023-11-17T01:20:08Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mitigating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs)
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive, two for bias evaluation, and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - A survey on bias in machine learning research [0.0]
Current research on bias in machine learning often focuses on fairness, while overlooking the roots or causes of bias.
This article aims to bridge the gap between past literature on bias in research by providing taxonomy for potential sources of bias and errors in data and models.
arXiv Detail & Related papers (2023-08-22T07:56:57Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Testing Occupational Gender Bias in Language Models: Towards Robust Measurement and Zero-Shot Debiasing [98.07536837448293]
Large language models (LLMs) have been shown to exhibit a variety of harmful, human-like biases against various demographics.
We introduce a list of desiderata for robustly measuring biases in generative language models.
We then use this benchmark to test several state-of-the-art open-source LLMs, including Llama, Mistral, and their instruction-tuned versions.
arXiv Detail & Related papers (2022-12-20T22:41:24Z) - Towards an Enhanced Understanding of Bias in Pre-trained Neural Language
Models: A Survey with Special Emphasis on Affective Bias [2.6304695993930594]
We present a survey to comprehend bias in large pre-trained language models, analyze the stages at which they occur, and various ways in which these biases could be quantified and mitigated.
Considering wide applicability of textual affective computing based downstream tasks in real-world systems such as business, healthcare, education, etc., we give a special emphasis on investigating bias in the context of affect (emotion) i.e., Affective Bias.
We present a summary of various bias evaluation corpora that help to aid future research and discuss challenges in the research on bias in pre-trained language models.
arXiv Detail & Related papers (2022-04-21T18:51:19Z) - Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.