Towards an Enhanced Understanding of Bias in Pre-trained Neural Language
Models: A Survey with Special Emphasis on Affective Bias
- URL: http://arxiv.org/abs/2204.10365v1
- Date: Thu, 21 Apr 2022 18:51:19 GMT
- Authors: Anoop K., Manjary P. Gangan, Deepak P., Lajish V. L.
- Abstract summary: We present a survey to comprehend bias in large pre-trained language models, analyze the stages at which biases occur, and examine the various ways in which these biases can be quantified and mitigated.
Considering the wide applicability of textual affective computing based downstream tasks in real-world systems such as business, healthcare, and education, we place special emphasis on investigating bias in the context of affect (emotion), i.e., Affective Bias.
We present a summary of various bias evaluation corpora that aid future research and discuss challenges in research on bias in pre-trained language models.
- Score: 2.6304695993930594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remarkable progress in Natural Language Processing (NLP) brought about by
deep learning, particularly with the recent advent of large pre-trained neural
language models, has come under scrutiny as several studies have begun to discuss
and report potential biases in NLP applications. Bias in NLP is found to
originate from latent historical biases encoded by humans into textual data,
which get perpetuated or even amplified by NLP algorithms. We present a survey
to comprehend bias in large pre-trained language models, analyze the stages at
which biases occur in these models, and examine the various ways in which these
biases can be quantified and mitigated. Considering the wide applicability of textual
affective computing based downstream tasks in real-world systems such as
business, healthcare, and education, we place special emphasis on
investigating bias in the context of affect (emotion), i.e., Affective Bias, in
large pre-trained language models. We present a summary of various bias
evaluation corpora that aid future research and discuss challenges in
research on bias in pre-trained language models. We believe that our
attempt to draw a comprehensive view of bias in pre-trained language models,
and especially the exploration of affective bias, will be highly beneficial to
researchers interested in this evolving field.
Related papers
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language Models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons, a technique to accurately pinpoint the units (i.e., neurons) in a language model that can be attributed to undesirable behavior such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability with low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z)
- Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination [54.865941973768905]
We propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.
CRISPR automatically determines biased outputs and categorizes neurons that affect the biased outputs as bias neurons using an explainability method.
Experimental results demonstrate the effectiveness of our method in mitigating biases under zero-shot instruction-following settings without losing the model's task performance and existing knowledge.
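Both neuron-level entries above follow the same recipe: attribute biased outputs to specific feed-forward units, then suppress those units. As a rough illustration of the suppression step only (the layer and neuron indices below are hypothetical placeholders, not the product of either paper's attribution procedure), a PyTorch forward hook can zero selected FFN activations in a BERT-style model at inference time:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

# Hypothetical layer -> neuron-index mapping; the papers above derive such
# sets automatically with explainability/attribution methods.
BIAS_NEURONS = {8: [121, 305], 10: [77]}

def make_suppression_hook(indices):
    def hook(module, inputs, output):
        output[..., indices] = 0.0  # zero the flagged FFN activations
        return output
    return hook

for layer_idx, indices in BIAS_NEURONS.items():
    # `intermediate` emits BERT's post-GELU FFN activations
    ffn = model.bert.encoder.layer[layer_idx].intermediate
    ffn.register_forward_hook(make_suppression_hook(indices))
```

The hard part, which both papers address, is choosing the indices automatically rather than hand-picking them as in this sketch.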
arXiv Detail & Related papers (2023-11-16T07:16:55Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identifying, evaluating, and removing biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models [3.5278693565908137]
"Affective Bias" is biased association of emotions towards a particular gender, race, and religion.
We show the existence of statistically significant affective bias in the PLM based emotion detection systems.
arXiv Detail & Related papers (2023-01-21T20:23:09Z)
- An Analysis of Social Biases Present in BERT Variants Across Multiple Languages [0.0]
We investigate the bias present in monolingual BERT models across a diverse set of languages.
We propose a template-based method to measure any kind of bias, based on sentence pseudo-likelihood.
We conclude that current methods of probing for bias are highly language-dependent.
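For intuition, such pseudo-likelihood scoring masks one token at a time and sums the masked LM's log-probabilities of the original tokens; comparing paired templates that differ only in the demographic term then yields a bias signal. A minimal sketch, assuming bert-base-uncased and illustrative templates rather than the paper's actual ones:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest) with each position masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Paired templates differing only in the demographic term (illustrative):
print(pseudo_log_likelihood("The woman worked as a nurse."))
print(pseudo_log_likelihood("The man worked as a nurse."))
```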
arXiv Detail & Related papers (2022-11-25T23:38:08Z)
- Bias at a Second Glance: A Deep Dive into Bias for German Educational Peer-Review Data Modeling [10.080007569933331]
We analyze bias across text and through multiple architectures on a corpus of 9,165 German peer-reviews collected over five years.
Our collected corpus does not reveal many biases in the co-occurrence analysis or in the GloVe embeddings.
Pre-trained German language models, however, exhibit substantial conceptual, racial, and gender bias.
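Embedding-level probes of the kind alluded to here are commonly WEAT-style association tests, which compare how strongly two target word sets associate with two attribute word sets in vector space. A minimal sketch, assuming `vec` is a dict of pre-loaded GloVe vectors and the word lists are supplied by the experimenter:

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b, vec):
    # mean similarity to attribute set A minus mean similarity to set B
    return (np.mean([cos(vec[word], vec[a]) for a in attrs_a])
            - np.mean([cos(vec[word], vec[b]) for b in attrs_b]))

def weat_effect_size(targets_x, targets_y, attrs_a, attrs_b, vec):
    sx = [association(w, attrs_a, attrs_b, vec) for w in targets_x]
    sy = [association(w, attrs_a, attrs_b, vec) for w in targets_y]
    pooled_std = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled_std  # Cohen's-d-style score
```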
arXiv Detail & Related papers (2022-09-21T13:08:16Z)
- A Survey on Bias and Fairness in Natural Language Processing [1.713291434132985]
We analyze the origins of biases, the definitions of fairness, and how bias in different subfields of NLP can be mitigated.
We discuss how future studies can work towards eradicating pernicious biases from NLP algorithms.
arXiv Detail & Related papers (2022-03-06T18:12:30Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
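As a hedged sketch of the general idea (not necessarily the exact weighting scheme of the paper), each training instance can be weighted inversely to the frequency of its (demographic, label) pair, so over-represented author groups do not dominate the training loss:

```python
from collections import Counter

import torch
import torch.nn.functional as F

def balanced_instance_weights(demographics, labels):
    """Weight proportional to 1 / frequency of the (demographic, label) pair."""
    counts = Counter(zip(demographics, labels))
    n, k = len(labels), len(counts)
    return torch.tensor([n / (k * counts[(d, y)])
                         for d, y in zip(demographics, labels)])

def reweighted_loss(logits, targets, weights):
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()

# Usage inside a training step (shapes: logits [B, C], targets [B]):
# loss = reweighted_loss(logits, targets, weights[batch_indices])
```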
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Impact of Gender Debiased Word Embeddings in Language Modeling [0.0]
Gender, racial, and social biases have been detected as evident examples of unfairness in Natural Language Processing applications.
Recent studies have shown that the human-generated data used in training is an apparent source of these biases.
Current algorithms have also been proven to amplify biases from data.
arXiv Detail & Related papers (2021-05-03T14:45:10Z)
- Towards Controllable Biases in Language Generation [87.89632038677912]
We develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups.
We analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics.
arXiv Detail & Related papers (2020-05-01T08:25:11Z)