Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based
Bias in NLP
- URL: http://arxiv.org/abs/2103.00453v1
- Date: Sun, 28 Feb 2021 11:07:37 GMT
- Title: Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based
Bias in NLP
- Authors: Timo Schick, Sahana Udupa, Hinrich Schütze
- Abstract summary: We propose a decoding algorithm that reduces the probability of a model producing problematic text.
While our approach by no means eliminates the issue of language models generating biased text, we believe it to be an important step in this direction.
- Score: 10.936043362876651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When trained on large, unfiltered crawls from the internet, language models
pick up and reproduce all kinds of undesirable biases that can be found in the
data: they often generate racist, sexist, violent or otherwise toxic language.
As large models often require millions of training examples to achieve good
performance, it is difficult to completely prevent them from being exposed to
such content. In this paper, we investigate whether pretrained language models
at least know when they exhibit some undesirable bias or produce toxic content.
Based on our findings, we propose a decoding algorithm that reduces the
probability of a model producing problematic text given only a textual
description of the undesired behavior. This algorithm does not rely on manually
curated word lists, nor does it require any training data or changes to the
model's parameters. While our approach by no means eliminates the issue of
language models generating biased text, we believe it to be an important step
in this direction.
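To make the proposed decoding scheme concrete, here is a minimal sketch of the core idea: compare the model's next-token distribution with and without a textual description of the undesired behavior prepended, and down-weight tokens that become more likely under that description. The prompt template, the decay constant, and the choice of GPT-2 are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of self-debiasing decoding, assuming GPT-2 via the Hugging Face
# transformers library. The bias-description template and the decay constant are
# illustrative; the paper's exact settings may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def next_token_probs(prefix: str) -> torch.Tensor:
    """Next-token distribution of the model given a text prefix."""
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)


def self_debiased_probs(prompt: str, behavior: str, decay: float = 50.0) -> torch.Tensor:
    """Down-weight tokens whose probability increases when the model is
    conditioned on a description of the undesired behavior."""
    p_plain = next_token_probs(prompt)
    # Hypothetical template describing the undesired behavior.
    p_biased = next_token_probs(f"The following text contains {behavior}:\n{prompt}")
    delta = p_plain - p_biased
    # Tokens that the "biased" context makes MORE likely (delta < 0) are
    # penalized exponentially; all other tokens keep their probability.
    scale = torch.where(delta >= 0, torch.ones_like(delta), torch.exp(decay * delta))
    p = p_plain * scale
    return p / p.sum()


# Example: pick the most likely next token under the debiased distribution.
probs = self_debiased_probs("The protesters were", "rude, disrespectful or toxic language")
print(tokenizer.decode(torch.argmax(probs).item()))
```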
Related papers
- From Prejudice to Parity: A New Approach to Debiasing Large Language Model Word Embeddings [2.9324535682810886]
DeepSoftDebias is an algorithm that uses a neural network to perform 'soft debiasing'.
We exhaustively evaluate this algorithm across a variety of SOTA datasets, accuracy metrics, and challenging NLP tasks.
We find that DeepSoftDebias outperforms the current state-of-the-art methods at reducing bias across gender, race, and religion.
arXiv Detail & Related papers (2024-02-18T08:53:41Z)
- Generating Enhanced Negatives for Training Language-Based Object Detectors [86.1914216335631]
We propose to leverage the vast knowledge built into modern generative models to automatically build negatives that are more relevant to the original data.
Specifically, we use large-language-models to generate negative text descriptions, and text-to-image diffusion models to also generate corresponding negative images.
Our experimental analysis confirms the relevance of the generated negative data, and its use in language-based detectors improves performance on two complex benchmarks.
arXiv Detail & Related papers (2023-12-29T23:04:00Z)
- Fine-tuning Language Models for Factuality [96.5203774943198]
Large pre-trained language models (LLMs) have seen widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z)
- MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models [40.992566245706996]
We propose the MiLe Loss function to mitigate the bias arising from differing learning difficulties across tokens.
We train generative language models at different scales of 468M, 1.2B, and 6.7B parameters.
Experiments reveal that models incorporating the proposed MiLe Loss can gain consistent performance improvement on downstream benchmarks.
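The summary above does not spell out the loss itself, so the sketch below shows a generic difficulty-aware token weighting in the spirit of a focal-style loss; it is a stand-in illustration only, and the actual MiLe Loss formulation in the paper may differ.

```python
# Illustrative sketch of a difficulty-weighted token loss. This is a generic
# focal-style re-weighting used only to show the idea of adjusting per-token
# losses by learning difficulty; the paper's MiLe Loss may be defined differently.
import torch
import torch.nn.functional as F


def difficulty_weighted_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Per-token cross-entropy scaled by (1 - p_target)^gamma, so that tokens
    the model already predicts confidently contribute less to the loss."""
    log_probs = F.log_softmax(logits, dim=-1)                       # (tokens, vocab)
    target_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    weight = (1.0 - target_logp.exp()).pow(gamma)
    return -(weight * target_logp).mean()


# Toy usage with random logits over a 10-token vocabulary.
loss = difficulty_weighted_loss(torch.randn(4, 10), torch.tensor([1, 3, 5, 7]))
```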
arXiv Detail & Related papers (2023-10-30T13:33:21Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
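As a rough illustration of the projection idea in this entry, the sketch below removes a set of bias directions from a text embedding with an orthogonal projection. Forming the directions as differences of paired prompt embeddings and using a plain, uncalibrated projection are assumptions; the paper additionally calibrates the projection matrix.

```python
# Sketch of removing bias directions from a text embedding via projection.
# The pairing of prompts and the uncalibrated projection are simplifying assumptions.
import numpy as np


def orthogonal_projection(directions: np.ndarray) -> np.ndarray:
    """P = I - A (A^T A)^+ A^T removes the span of the columns of `directions`."""
    A = directions  # shape: (embedding_dim, num_directions)
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A.T @ A) @ A.T


def debias_embedding(text_emb: np.ndarray, paired_prompt_embs: list) -> np.ndarray:
    """Project out directions spanned by differences of paired prompt embeddings
    (e.g., embeddings of 'a photo of a man' vs. 'a photo of a woman')."""
    dirs = np.stack([a - b for a, b in paired_prompt_embs], axis=1)
    z = orthogonal_projection(dirs) @ text_emb
    return z / np.linalg.norm(z)


# Toy usage with random vectors standing in for CLIP-style text embeddings.
rng = np.random.default_rng(0)
emb = debias_embedding(rng.normal(size=512), [(rng.normal(size=512), rng.normal(size=512))])
print(emb.shape)
```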
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Discovering Latent Knowledge in Language Models Without Supervision [72.95136739040676]
Existing techniques for training language models can be misaligned with the truth.
We propose directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way.
We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models.
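The entry above does not detail the search procedure; one instantiation of the idea trains a small probe on the activations of paired statements and their negations so that the two predicted probabilities are consistent (sum to one) and confident, without any labels. The probe architecture and loss weights below are illustrative assumptions.

```python
# Sketch of a consistency-based probe over paired activations (a statement and
# its negation). The probe architecture and loss weights are illustrative.
import torch
import torch.nn as nn


class Probe(nn.Module):
    """Maps a hidden activation to a probability that the statement is true."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(h)).squeeze(-1)


def consistency_loss(probe: Probe, h_pos: torch.Tensor, h_neg: torch.Tensor) -> torch.Tensor:
    """Encourage p(statement) + p(negation) = 1 (consistency) and push both
    probabilities away from 0.5 (confidence). No labels are used."""
    p_pos, p_neg = probe(h_pos), probe(h_neg)
    consistency = (p_pos - (1.0 - p_neg)).pow(2).mean()
    confidence = torch.minimum(p_pos, p_neg).pow(2).mean()
    return consistency + confidence


# Toy training step on random activations standing in for hidden states.
probe = Probe(dim=768)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
h_pos, h_neg = torch.randn(32, 768), torch.randn(32, 768)
loss = consistency_loss(probe, h_pos, h_neg)
loss.backward()
opt.step()
```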
arXiv Detail & Related papers (2022-12-07T18:17:56Z)
- A Differentiable Language Model Adversarial Attack on Text Classifiers [10.658675415759697]
We propose a new black-box sentence-level attack for natural language processing.
Our method fine-tunes a pre-trained language model to generate adversarial examples.
We show that the proposed attack outperforms competitors on a diverse set of NLP problems for both computed metrics and human evaluation.
arXiv Detail & Related papers (2021-07-23T14:43:13Z)
- Understanding by Understanding Not: Modeling Negation in Language Models [81.21351681735973]
Negation is a core construction in natural language.
We propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences.
We reduce the mean top-1 error rate to 4% on the negated LAMA dataset.
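To make the unlikelihood objective concrete, here is a minimal sketch: negated (false) sentences contribute a term that penalizes assigning high probability to their tokens, alongside the usual likelihood term for true sentences. The mixing weight and batching are illustrative assumptions.

```python
# Sketch of combining the usual likelihood objective with an unlikelihood term
# on negated sentences. The mixing weight alpha is an illustrative assumption.
import torch
import torch.nn.functional as F


def unlikelihood_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """-log(1 - p(target)) per token: discourages predicting the given tokens."""
    log_probs = F.log_softmax(logits, dim=-1)                        # (tokens, vocab)
    p_target = log_probs.gather(-1, target_ids.unsqueeze(-1)).exp()  # (tokens, 1)
    return -torch.log1p(-p_target.clamp(max=1.0 - 1e-6)).mean()


def negation_aware_loss(pos_logits, pos_ids, neg_logits, neg_ids, alpha: float = 1.0):
    """Likelihood on true sentences plus unlikelihood on their negations."""
    return F.cross_entropy(pos_logits, pos_ids) + alpha * unlikelihood_loss(neg_logits, neg_ids)


# Toy usage with random logits over a 100-token vocabulary.
loss = negation_aware_loss(torch.randn(8, 100), torch.randint(0, 100, (8,)),
                           torch.randn(8, 100), torch.randint(0, 100, (8,)))
```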
arXiv Detail & Related papers (2021-05-07T21:58:35Z)
- Detecting and Exorcising Statistical Demons from Language Models with Anti-Models of Negative Data [13.392212395386933]
We find that within a model family, as the number of parameters, training epochs, and data set size increase, so does a model's ability to generalize to negative n-gram data.
We propose a form of inductive bias that attenuates such undesirable signals with negative data distributions automatically learned from positive data.
arXiv Detail & Related papers (2020-10-22T16:45:32Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
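As an illustration of the second approach in the entry above, the sketch below combines a bag-of-words bias sub-model with the main model during training, product-of-experts style, so that the main model is not rewarded for relying on the lexical shortcut; at test time only the main model is used. The exact combination rule in the paper may differ.

```python
# Sketch of ensembling a main NLI model with a bag-of-words bias sub-model
# during training (product-of-experts style). The exact combination used by
# the paper may differ; at test time only the main model's logits are used.
import torch
import torch.nn.functional as F


def ensemble_training_loss(main_logits: torch.Tensor,
                           bow_logits: torch.Tensor,
                           labels: torch.Tensor) -> torch.Tensor:
    """Add the (frozen) bias model's log-probabilities to the main model's
    log-probabilities, so the main model receives little gradient on examples
    the bag-of-words model already explains."""
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bow_logits, dim=-1).detach()
    return F.cross_entropy(combined, labels)


# Toy usage for a 3-way NLI task.
loss = ensemble_training_loss(torch.randn(16, 3), torch.randn(16, 3), torch.randint(0, 3, (16,)))
```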
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.