Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
- URL: http://arxiv.org/abs/2004.04498v3
- Date: Thu, 9 Jul 2020 14:20:10 GMT
- Title: Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
- Authors: Danielle Saunders and Bill Byrne
- Abstract summary: Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men.
The recent WinoMT challenge set allows us to measure this effect directly.
We use transfer learning on a small set of trusted, gender-balanced examples.
- Score: 21.44025591721678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training data for NLP tasks often exhibits gender bias in that fewer
sentences refer to women than to men. In Neural Machine Translation (NMT)
gender bias has been shown to reduce translation quality, particularly when the
target language has grammatical gender. The recent WinoMT challenge set allows
us to measure this effect directly (Stanovsky et al., 2019).
Ideally we would reduce system bias by simply debiasing all data prior to
training, but achieving this effectively is itself a challenge. Rather than
attempt to create a 'balanced' dataset, we use transfer learning on a small set
of trusted, gender-balanced examples. This approach gives strong and consistent
improvements in gender debiasing with much less computational cost than
training from scratch.
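As a rough sketch of this adaptation step, assuming a pre-trained PyTorch NMT model whose forward pass returns the target-token cross-entropy loss, and a hypothetical `balanced_set` of trusted sentence pairs (both names are illustrative, not the authors' code):

```python
import torch
from torch.utils.data import DataLoader

def adapt_on_balanced_set(model, balanced_set, epochs=1, lr=1e-5):
    """Fine-tune a converged NMT model on a small, trusted,
    gender-balanced dataset instead of retraining from scratch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # small LR for gentle adaptation
    loader = DataLoader(balanced_set, batch_size=32, shuffle=True)
    model.train()
    for _ in range(epochs):
        for src, tgt in loader:
            optimizer.zero_grad()
            loss = model(src, tgt)  # assumed: returns cross-entropy over target tokens
            loss.backward()
            optimizer.step()
    return model
```

Because the trusted set is tiny, a pass over it costs a fraction of full retraining, which is where the computational saving comes from.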
A known pitfall of transfer learning on new domains is 'catastrophic
forgetting', which we address both in adaptation and in inference. During
adaptation we show that Elastic Weight Consolidation allows a performance
trade-off between general translation quality and bias reduction. During
inference we propose a lattice-rescoring scheme which outperforms all systems
evaluated in Stanovsky et al. (2019) on WinoMT with no degradation of general
test set BLEU, and we show this scheme can be applied to remove gender bias in
the output of 'black box' online commercial MT systems. We demonstrate our
approach translating from English into three languages with varied linguistic
properties and data availability.
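For reference, Elastic Weight Consolidation penalises movement away from the original (general-domain) parameters, weighted by an estimate of their Fisher information; the scaling factor lambda realises the trade-off between general translation quality and bias reduction described above. A minimal sketch, not the authors' implementation:

```python
import torch

def ewc_penalty(model, ref_params, fisher, lam):
    """EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2.
    `ref_params` and `fisher` map parameter names to tensors computed on
    the general-domain model before adaptation starts."""
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + (fisher[name] * (param - ref_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During adaptation the training objective becomes:
#   total_loss = translation_loss + ewc_penalty(model, ref_params, fisher, lam)
# Larger lam preserves general BLEU; smaller lam allows stronger debiasing.
```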
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Reducing Gender Bias in Machine Translation through Counterfactual Data Generation [0.0]
We show that gender bias can be significantly mitigated, albeit at the expense of translation quality due to catastrophic forgetting.
We also propose a novel domain-adaptation technique that leverages in-domain data created through counterfactual data generation (a toy gender-swap example appears after this list).
The relevant code will be made available on GitHub.
arXiv Detail & Related papers (2023-11-27T23:03:01Z)
- A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation [35.44115368160656]
We investigate whether and to what extent machine translation models exhibit gender bias.
We find that IFT models default to male-inflected translations, even disregarding female occupational stereotypes.
We propose an easy-to-implement and effective bias mitigation solution.
arXiv Detail & Related papers (2023-10-18T17:36:55Z)
- The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z)
- Mitigating Gender Bias in Machine Translation through Adversarial Learning [0.8883733362171032]
We present an adversarial learning framework that mitigates gender bias in seq2seq machine translation.
Our framework improves the disparity in translation quality for sentences with male vs. female entities by 86% for English-German translation and 91% for English-French translation.
arXiv Detail & Related papers (2022-03-20T23:35:09Z)
- Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting [88.83117372793737]
Forgetting information in the original training data may damage the model's downstream performance by a large margin.
We propose GEnder Equality Prompt (GEEP) to improve gender fairness of pre-trained models with less forgetting.
arXiv Detail & Related papers (2021-10-11T15:52:16Z)
- Improving Multilingual Translation by Representation and Gradient Regularization [82.42760103045083]
We propose a joint approach to regularize NMT models at both representation-level and gradient-level.
Our results demonstrate that our approach is highly effective in both reducing off-target translation occurrences and improving zero-shot translation performance.
arXiv Detail & Related papers (2021-09-10T10:52:21Z)
- Improving Gender Translation Accuracy with Filtered Self-Training [14.938401898546548]
Machine translation systems often output incorrect gender, even when the gender is clear from context.
We propose a gender-filtered self-training technique to improve gender translation accuracy on unambiguously gendered inputs.
arXiv Detail & Related papers (2021-04-15T18:05:29Z)
- First the worst: Finding better gender translations during beam search [19.921216907778447]
We focus on gender bias resulting from systematic errors in grammatical gender translation.
We experiment with reranking n-best lists using gender features obtained automatically from the source sentence.
We find that a combination of these techniques allows large gains in WinoMT accuracy without requiring additional bilingual data or an additional NMT model.
arXiv Detail & Related papers (2021-04-15T12:53:30Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
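The toy gender-swap example promised above, illustrating the counterfactual data generation idea in its simplest form (vocabulary and handling are deliberately minimal; this is not taken from any of the papers):

```python
# Word-level gender swap on the English source side. Real counterfactual
# generation must also resolve ambiguity ("her" can map to "him" or "his"
# depending on its role) and repair grammatical agreement in the target
# language; this sketch ignores both.
SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her",
         "her": "him", "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    return " ".join(SWAPS.get(tok, tok) for tok in sentence.lower().split())

print(counterfactual("he finished his shift"))  # -> "she finished her shift"
```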