Exploring the Impact of Training Data Distribution and Subword
Tokenization on Gender Bias in Machine Translation
- URL: http://arxiv.org/abs/2309.12491v2
- Date: Sat, 30 Sep 2023 19:00:32 GMT
- Title: Exploring the Impact of Training Data Distribution and Subword
Tokenization on Gender Bias in Machine Translation
- Authors: Bar Iluz, Tomasz Limisiewicz, Gabriel Stanovsky, David Mareček
- Abstract summary: We study the effect of tokenization on gender bias in machine translation.
We observe that female and non-stereotypical gender inflections of profession names tend to be split into subword tokens.
We show that analyzing subword splits provides good estimates of gender-form imbalance in the training data.
- Score: 19.719314005149883
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the effect of tokenization on gender bias in machine translation, an
aspect that has been largely overlooked in previous works. Specifically, we
focus on the interactions between the frequency of gendered profession names in
training data, their representation in the subword tokenizer's vocabulary, and
gender bias. We observe that female and non-stereotypical gender inflections of
profession names (e.g., Spanish "doctora" for "female doctor") tend to be split
into multiple subword tokens. Our results indicate that the imbalance of gender
forms in the model's training corpus is a major factor contributing to gender
bias and has a greater impact than subword splitting. We show that analyzing
subword splits provides good estimates of gender-form imbalance in the training
data and can be used even when the corpus is not publicly available. We also
demonstrate that fine-tuning just the token embedding layer can decrease the
gap in gender prediction accuracy between female and male forms without
impairing the translation quality.
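The two techniques the abstract names lend themselves to short sketches. First, the subword-split analysis: the claim is that the number of pieces a tokenizer produces for a gendered profession form is a usable proxy for how frequent that form was in the training corpus. Below is a minimal sketch using the Hugging Face transformers tokenizer API; the Helsinki-NLP/opus-mt-en-es checkpoint and the word list are illustrative assumptions, not the paper's actual models or data:

```python
# Sketch: compare subword split counts for masculine/feminine Spanish
# profession forms. Per the abstract, more pieces for a form suggests it
# was rarer in the corpus the subword vocabulary was learned from.
# Model choice and word list are assumptions for illustration only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")

profession_pairs = [
    ("doctor", "doctora"),      # masculine / feminine forms
    ("profesor", "profesora"),
    ("ingeniero", "ingeniera"),
]

for masc, fem in profession_pairs:
    masc_pieces = tokenizer.tokenize(masc)
    fem_pieces = tokenizer.tokenize(fem)
    print(f"{masc}: {masc_pieces} ({len(masc_pieces)} tokens)")
    print(f"{fem}:  {fem_pieces} ({len(fem_pieces)} tokens)")
```

Second, the embedding-only fine-tuning idea amounts to freezing every model parameter except the token embedding layer before training. Again a hedged sketch under the same model assumption, with the fine-tuning loop itself omitted:

```python
# Sketch: unfreeze only the token embeddings before fine-tuning,
# following the abstract's embedding-layer-only idea. The checkpoint is
# assumed; the paper's training setup is not reproduced here.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-es")

for param in model.parameters():
    param.requires_grad = False
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True  # only token embeddings are updated
```

One caveat on the second sketch: many MT checkpoints tie input and output embeddings, so updating the input embeddings also moves the output projection; whether that is desirable depends on the checkpoint and is worth checking before training.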
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a benchmark for evaluating gender-inclusive translation.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Gender Inflected or Bias Inflicted: On Using Grammatical Gender Cues for Bias Evaluation in Machine Translation [0.0]
We use Hindi as the source language and construct two sets of gender-specific sentences to evaluate different Hindi-English (HI-EN) NMT systems.
Our work highlights the importance of considering the nature of language when designing such extrinsic bias evaluation datasets.
arXiv Detail & Related papers (2023-11-07T07:09:59Z)
- How To Build Competitive Multi-gender Speech Translation Models For Controlling Speaker Gender Translation [21.125217707038356]
When translating from notional gender languages into grammatical gender languages, the generated translation requires explicit gender assignments for various words, including those referring to the speaker.
To avoid such biased and non-inclusive behavior, the gender assignment of speaker-related expressions should be guided by externally provided metadata about the speaker's gender.
This paper aims to achieve the same results by integrating the speaker's gender metadata into a single "multi-gender" neural ST model that is easier to maintain.
arXiv Detail & Related papers (2023-10-23T17:21:32Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender, names and other mysteries: Towards the ambiguous for gender-inclusive translation [7.322734499960981]
This paper explores the case where the source sentence lacks explicit gender markers, but the target sentence contains them due to richer grammatical gender.
We find that many name-gender co-occurrences in MT data are not resolvable with 'unambiguous gender' in the source language.
We discuss potential steps toward gender-inclusive translation which accepts the ambiguity in both gender and translation.
arXiv Detail & Related papers (2023-06-07T16:21:59Z)
- How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation [14.955696163410254]
We bring the analysis on gender bias in automatic translation onto a seemingly neutral yet critical component: word segmentation.
Our results on two language pairs (English-Italian/French) show that state-of-the-art sub-word splitting (BPE) comes at the cost of higher gender bias.
In light of this finding, we propose a combined approach that preserves BPE's overall translation quality while leveraging the higher ability of character-based segmentation to properly translate gender.
arXiv Detail & Related papers (2021-05-28T12:38:21Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)