Gender Biases and Where to Find Them: Exploring Gender Bias in
Pre-Trained Transformer-based Language Models Using Movement Pruning
- URL: http://arxiv.org/abs/2207.02463v1
- Date: Wed, 6 Jul 2022 06:20:35 GMT
- Title: Gender Biases and Where to Find Them: Exploring Gender Bias in
Pre-Trained Transformer-based Language Models Using Movement Pruning
- Authors: Przemyslaw Joniak and Akiko Aizawa
- Abstract summary: We show a novel framework for inspecting bias in transformer-based language models via movement pruning.
We implement our framework by pruning the model while fine-tuning it on the debiasing objective.
We re-discover a bias-performance trade-off: the better the model performs, the more bias it contains.
- Score: 32.62430731115707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language model debiasing has emerged as an important field of study in the
NLP community. Numerous debiasing techniques have been proposed, but bias ablation
remains an unaddressed issue. We demonstrate a novel framework for inspecting
bias in pre-trained transformer-based language models via movement pruning.
Given a model and a debiasing objective, our framework finds a subset of the
model containing less bias than the original model. We implement our framework
by pruning the model while fine-tuning it on the debiasing objective. Only the
pruning scores are optimized: parameters coupled with the model's weights that
act as gates. We experiment with pruning attention heads, an important building
block of transformers: we prune square blocks of weights, and we establish a new
way of pruning entire heads. Lastly, we demonstrate the use of our framework
on gender bias and, based on our findings, propose an improvement to an
existing debiasing method. Additionally, we re-discover a bias-performance
trade-off: the better the model performs, the more bias it contains.
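To make the gating mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract: per-head pruning scores act as gates on attention-head outputs, and only those scores receive gradients from the debiasing objective. Names such as `num_heads` and `keep_ratio` are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn

class HeadGate(nn.Module):
    """Learned pruning scores that gate entire attention heads.

    Only `self.scores` is optimized; the transformer weights stay frozen,
    so heads whose removal reduces bias are pushed toward a gate of 0.
    """
    def __init__(self, num_heads: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_heads))
        self.keep = max(1, round(keep_ratio * num_heads))

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: (batch, num_heads, seq_len, head_dim)
        hard = torch.zeros_like(self.scores)
        hard[torch.topk(self.scores, self.keep).indices] = 1.0
        soft = torch.sigmoid(self.scores)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # sigmoid gradients in the backward pass.
        gate = hard + soft - soft.detach()
        return head_outputs * gate.view(1, -1, 1, 1)
```
In training, the transformer's own weights would be frozen and the optimizer constructed over the gate scores alone, so pruning decisions are driven purely by the debiasing objective.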
Related papers
- Projective Methods for Mitigating Gender Bias in Pre-trained Language Models [10.418595661963062]
Projective methods are fast to implement, use a small number of saved parameters, and make no updates to the existing model parameters.
We find that projective methods can be effective at both intrinsic bias and downstream bias mitigation, but that the two outcomes are not necessarily correlated.
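As a rough illustration of the projective idea (not the paper's exact method), the sketch below removes the component of an embedding along an estimated gender direction; consistent with the summary above, no model parameters are updated.
```python
import numpy as np

def project_out(x: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of embedding `x` along a bias direction."""
    v = direction / np.linalg.norm(direction)
    return x - np.dot(x, v) * v

# `emb` is a hypothetical word -> vector lookup; the direction could be
# estimated from definitional pairs such as ("he", "she").
# debiased = project_out(emb["doctor"], emb["he"] - emb["she"])
```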
arXiv Detail & Related papers (2024-03-27T17:49:31Z)
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
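A hedged sketch of how such bias experts might be used downstream: one-vs-rest binary heads score how confidently the biased auxiliary model solves each example, and the main model's loss is scaled down on those examples. The weighting scheme and all names here are illustrative, not the paper's recipe.
```python
import torch.nn.functional as F

def expert_reweighted_loss(main_logits, expert_probs, labels):
    # expert_probs: (batch, num_classes) sigmoid outputs of the binary
    # bias experts; high confidence on the gold label suggests the
    # example is solvable from biased shortcuts alone.
    p_gold = expert_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    weights = (1.0 - p_gold).detach()  # downweight bias-aligned examples
    ce = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * ce).mean()
```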
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions [50.67412723291881]
Societal biases present in pre-trained large language models are a critical issue.
We propose data intervention strategies as a powerful yet simple technique to reduce gender bias in pre-trained models.
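A toy sketch of one such intervention, counterfactually swapping gendered words; the swap list is a small illustrative subset, and the paper's few-shot selection procedure is not reproduced here.
```python
# Illustrative swap list; real interventions use curated lexicons and
# handle ambiguous forms ("her" can map to "him" or "his").
SWAPS = {"he": "she", "she": "he", "his": "her", "him": "her",
         "her": "him", "man": "woman", "woman": "man"}

def swap_gendered_words(sentence: str) -> str:
    return " ".join(SWAPS.get(tok, tok) for tok in sentence.lower().split())

print(swap_gendered_words("he finished his shift"))  # she finished her shift
```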
arXiv Detail & Related papers (2023-06-07T16:50:03Z)
- Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber [17.034228910493056]
This paper presents experimental analyses revealing that the existing biased models overfit to bias-conflicting samples in the training data.
We propose a straightforward and effective method called Echoes, which trains a biased model and a target model with a different strategy.
Our approach achieves superior debiasing results compared to the existing baselines on both synthetic and real-world datasets.
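A hedged sketch of the pseudo-bias-labeling intuition: examples the biased model solves are treated as bias-aligned and downweighted when training the target model. The exact weighting rule in Echoes differs; this only shows the shape of the approach.
```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_bias_weights(biased_logits, labels, aligned_weight=0.1):
    # Examples solved by the biased model are presumed bias-aligned.
    correct = biased_logits.argmax(dim=1).eq(labels).float()
    # 1.0 for bias-conflicting examples, `aligned_weight` for aligned ones.
    return 1.0 - (1.0 - aligned_weight) * correct

def target_loss(target_logits, biased_logits, labels):
    w = pseudo_bias_weights(biased_logits, labels)
    ce = F.cross_entropy(target_logits, labels, reduction="none")
    return (w * ce).mean()
```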
arXiv Detail & Related papers (2023-05-06T13:13:18Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
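A minimal sketch of the projection step, assuming the biased directions are given as rows of a matrix (e.g., differences of embeddings of spurious prompt pairs); the calibration of the projection matrix described in the paper is omitted.
```python
import torch

def orthogonal_projection(bias_dirs: torch.Tensor) -> torch.Tensor:
    """Build P = I - Q Q^T, which removes the span of the bias directions.

    bias_dirs: (k, d) matrix whose rows span the biased subspace.
    """
    q, _ = torch.linalg.qr(bias_dirs.T)          # orthonormal basis, (d, k)
    return torch.eye(bias_dirs.shape[1]) - q @ q.T

# Only the text embeddings are debiased:
# debiased = text_emb @ orthogonal_projection(bias_dirs)
```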
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
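A rough sketch of the ranking step under assumed inputs: given per-image activations from an interpretable network and the indices of features flagged as spurious, images within a class are sorted by mean spurious activation. The inputs and names are illustrative, not the paper's pipeline.
```python
import numpy as np

def rank_by_spuriosity(feats: np.ndarray, spurious_idx):
    """feats: (n_images, n_features) activations for one class.
    Returns image indices ordered from least to most spurious."""
    spuriosity = feats[:, spurious_idx].mean(axis=1)
    return np.argsort(spuriosity)
```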
arXiv Detail & Related papers (2022-12-05T23:15:43Z)
- Does Debiasing Inevitably Degrade the Model Performance [8.20550078248207]
We propose a theoretical framework explaining the three candidate mechanisms of the language model's gender bias.
We also discover a pathway through which debiasing will not degrade the model performance.
arXiv Detail & Related papers (2022-11-14T13:46:13Z)
- A Generative Approach for Mitigating Structural Biases in Natural Language Inference [24.44419010439227]
In this work, we reformulate the NLI task as a generative task, where a model is conditioned on the biased subset of the input and the label.
We show that this approach is highly robust to large amounts of bias.
We find that generative models are difficult to train and they generally perform worse than discriminative baselines.
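To make the reformulation concrete, here is a hedged sketch with a generic seq2seq model: it is conditioned on the hypothesis (the biased subset) plus a candidate label and scores the premise, and at test time the best-scoring label wins. The prompt format and the choice of T5 are assumptions, not the paper's setup.
```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

@torch.no_grad()
def label_score(premise: str, hypothesis: str, label: str) -> float:
    src = tok(f"{label} hypothesis: {hypothesis}", return_tensors="pt")
    tgt = tok(premise, return_tensors="pt").input_ids
    # .loss is the mean token negative log-likelihood of the premise
    # given (label, hypothesis); higher score = more likely premise.
    return -model(**src, labels=tgt).loss.item()

def classify(premise, hypothesis,
             labels=("entailment", "neutral", "contradiction")):
    return max(labels, key=lambda l: label_score(premise, hypothesis, l))
```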
arXiv Detail & Related papers (2021-08-31T17:59:45Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
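The core trick can be sketched as a product of experts: a weak model's log-probabilities are added to the main model's logits during training only, so examples the weak model already solves contribute little gradient. This is a generic product-of-experts sketch, not the paper's exact recipe.
```python
import torch.nn.functional as F

def product_of_experts_loss(main_logits, weak_logits, labels):
    # The weak model is frozen (detach); cross_entropy re-normalizes the
    # combined logits, which equals a softmax over summed log-probs.
    combined = (F.log_softmax(main_logits, dim=1)
                + F.log_softmax(weak_logits, dim=1).detach())
    return F.cross_entropy(combined, labels)
```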
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the lexical features that are likely to carry the bias and prevents the original model from learning these biased features.
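A minimal sketch of such a bag-of-words sub-model: word order is discarded by construction, so it can only pick up lexical cues, and its logits can then be ensembled with the main model during training (for example via the product-of-experts sketch above). Dimensions and names are illustrative.
```python
import torch
import torch.nn as nn

class BoWSubModel(nn.Module):
    def __init__(self, vocab_size: int = 30522, num_classes: int = 3):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, 128, mode="mean")
        self.out = nn.Linear(128, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); averaging embeddings discards
        # word order, leaving only bag-of-words (lexical) information.
        return self.out(self.emb(token_ids))
```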
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.