Mitigating Biases in Toxic Language Detection through Invariant
Rationalization
- URL: http://arxiv.org/abs/2106.07240v1
- Date: Mon, 14 Jun 2021 08:49:52 GMT
- Title: Mitigating Biases in Toxic Language Detection through Invariant
Rationalization
- Authors: Yung-Sung Chuang, Mingye Gao, Hongyin Luo, James Glass, Hung-yi Lee,
Yun-Nung Chen, Shang-Wen Li
- Abstract summary: Biases toward some attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection.
We propose to use invariant rationalization (InvRat), a game-theoretic framework consisting of a rationale generator and a predictor, to rule out spurious correlations between certain syntactic patterns and toxicity labels.
Our method yields a lower false positive rate on both lexical and dialectal attributes than previous debiasing methods.
- Score: 70.36701068616367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic detection of toxic language plays an essential role in protecting
social media users, especially minority groups, from verbal abuse. However,
biases toward some attributes, including gender, race, and dialect, exist in
most training datasets for toxicity detection. The biases make the learned
models unfair and can even exacerbate the marginalization of people.
Considering that current debiasing methods for general natural language
understanding tasks cannot effectively mitigate the biases in the toxicity
detectors, we propose to use invariant rationalization (InvRat), a
game-theoretic framework consisting of a rationale generator and a predictor,
to rule out spurious correlations between certain syntactic patterns (e.g.,
identity mentions, dialect) and toxicity labels. We empirically show that our
method yields a lower false positive rate on both lexical and dialectal
attributes than previous debiasing methods.
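To make the rationale-generator/predictor game more concrete, here is a minimal PyTorch-style sketch of an InvRat-like objective. The module structure, the mean-pooled bag-of-embeddings encoders, the Gumbel-softmax token mask, and the relaxed penalty term are illustrative assumptions rather than the authors' implementation; the "environments" would presumably be derived from the bias attributes (e.g., dialect or identity mentions).

```python
# Minimal, hypothetical sketch of an InvRat-style generator/predictor game.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleGenerator(nn.Module):
    """Selects a (differentiable) binary token mask: the rationale."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.scorer = nn.Linear(dim, 2)  # keep / drop logits per token

    def forward(self, tokens):
        logits = self.scorer(self.emb(tokens))                # (B, T, 2)
        mask = F.gumbel_softmax(logits, tau=1.0, hard=True)   # straight-through
        return mask[..., 0]                                   # (B, T) keep-mask

class Predictor(nn.Module):
    """Classifies toxicity from the masked input; optionally sees the environment."""
    def __init__(self, vocab_size, n_envs=0, dim=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.env_emb = nn.Embedding(n_envs, dim) if n_envs > 0 else None
        self.out = nn.Linear(dim, n_classes)

    def forward(self, tokens, mask, env=None):
        h = (self.emb(tokens) * mask.unsqueeze(-1)).mean(dim=1)  # masked mean-pool
        if self.env_emb is not None and env is not None:
            h = h + self.env_emb(env)
        return self.out(h)

def invrat_step(gen, pred_inv, pred_env, tokens, env, labels, lam=1.0):
    """One simplified step of the invariance objective: the environment-agnostic
    predictor should do at least as well as the environment-aware one, so the
    rationale carries no environment-specific (i.e., bias-attribute) signal."""
    mask = gen(tokens)
    loss_inv = F.cross_entropy(pred_inv(tokens, mask), labels)
    loss_env = F.cross_entropy(pred_env(tokens, mask, env), labels)
    # The generator and the invariant predictor minimize this relaxed penalty;
    # the environment-aware predictor is trained on loss_env in its own step.
    return loss_inv + lam * torch.relu(loss_inv - loss_env.detach())
```

In training, the environment-aware predictor would be updated on its own loss in an alternating step, pushing the generator toward rationales whose predictive power does not depend on the environment.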
Related papers
- On the Role of Speech Data in Reducing Toxicity Detection Bias [22.44133159647888]
We produce a set of high-quality group annotations for the multilingual MuTox dataset.
We then leverage these annotations to systematically compare speech- and text-based toxicity classifiers.
Our findings indicate that access to speech data during inference supports reduced bias against group mentions.
arXiv Detail & Related papers (2024-11-12T19:26:43Z)
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language Models (PLMs) are known to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability at low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z)
- Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination [54.865941973768905]
We propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.
CRISPR automatically identifies biased outputs and, using an explainability method, categorizes the neurons that affect those outputs as bias neurons.
Experimental results demonstrate the effectiveness of our method in mitigating biases under zero-shot instruction-following settings without losing the model's task performance and existing knowledge.
arXiv Detail & Related papers (2023-11-16T07:16:55Z)
- On Bias and Fairness in NLP: Investigating the Impact of Bias and Debiasing in Language Models on the Fairness of Toxicity Detection [7.297345802761503]
Representation bias, selection bias, and overamplification bias are investigated.
We show that overamplification bias has the greatest impact on the fairness of toxicity detection.
We introduce a list of guidelines to help ensure fairness in toxicity detection.
arXiv Detail & Related papers (2023-05-22T08:44:00Z)
- Toxicity Detection with Generative Prompt-based Inference [3.9741109244650823]
It is a long-known risk that language models (LMs), once trained on corpora containing undesirable content, can manifest biases and toxicity.
In this work, we explore the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering.
arXiv Detail & Related papers (2022-05-24T22:44:43Z)
- Detoxifying Language Models with a Toxic Corpus [16.7345472998388]
We propose to use a toxic corpus as an additional resource for reducing toxicity.
Our results show that a toxic corpus can indeed help to substantially reduce the toxicity of the language generation process.
arXiv Detail & Related papers (2022-04-30T18:25:18Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting (a generic reweighting sketch follows this list).
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Challenges in Automated Debiasing for Toxic Language Detection [81.04406231100323]
Biased associations have been a challenge in the development of classifiers for detecting toxic language.
We investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection.
Our focus is on lexical markers (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English).
arXiv Detail & Related papers (2021-01-29T22:03:17Z)
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
arXiv Detail & Related papers (2020-09-24T03:17:19Z)
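As a complement to the "Balancing out Bias" entry above, the sketch below shows one generic form of instance reweighting that decorrelates a demographic group variable from the toxicity label. The weighting scheme, group ids, and toy data are assumptions for illustration, not that paper's exact method.

```python
# Hypothetical instance-reweighting sketch: balance every (label, group) cell
# so the reweighted data carries no label-group correlation.
from collections import Counter

def balanced_instance_weights(labels, groups):
    """Return one weight per instance such that each (label, group)
    combination contributes equal total weight to the training loss."""
    n = len(labels)
    cell_counts = Counter(zip(labels, groups))
    n_cells = len(cell_counts)
    return [n / (n_cells * cell_counts[(y, g)]) for y, g in zip(labels, groups)]

# Toy data where toxic labels are over-represented for one dialect group.
labels = [1, 1, 1, 0, 1, 0, 0, 0]      # 1 = toxic, 0 = non-toxic
groups = ["aae", "aae", "aae", "aae",  # hypothetical dialect/group ids
          "sae", "sae", "sae", "sae"]
weights = balanced_instance_weights(labels, groups)
```

The resulting per-instance weights can then be passed to a weighted loss (e.g., weighted cross-entropy) so the classifier no longer sees a spurious association between group membership and toxicity.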