Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It
- URL: http://arxiv.org/abs/2010.05332v2
- Date: Thu, 10 Dec 2020 15:02:08 GMT
- Title: Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It
- Authors: Danielle Saunders and Rosie Sallis and Bill Byrne
- Abstract summary: We propose schemes for incorporating explicit word-level gender inflection tags into Neural Machine Translation.
We find that simple existing approaches can over-generalize a gender feature to multiple entities in a sentence.
We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Machine Translation (NMT) has been shown to struggle with grammatical
gender that is dependent on the gender of human referents, which can cause
gender bias effects. Many existing approaches to this problem seek to control
gender inflection in the target language by explicitly or implicitly adding a
gender feature to the source sentence, usually at the sentence level.
In this paper we propose schemes for incorporating explicit word-level gender
inflection tags into NMT. We explore the potential of this gender-inflection
controlled translation when the gender feature can be determined from a human
reference, or when a test sentence can be automatically gender-tagged,
assessing on English-to-Spanish and English-to-German translation.
We find that simple existing approaches can over-generalize a gender feature
to multiple entities in a sentence, and suggest effective alternatives in the
form of tagged coreference adaptation data. We also propose an extension to
assess translations of gender-neutral entities from English given a
corresponding linguistic convention, such as a non-binary inflection, in the
target language.
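The word-level tagging scheme the abstract describes can be illustrated with a minimal sketch. The tag tokens (`<F>`, `<M>`) and the `tag_source` helper below are hypothetical, chosen for illustration; the paper's actual tag inventory and alignment procedure may differ.

```python
# Minimal sketch of word-level gender-inflection tagging for an NMT source
# sentence. Tag tokens (<M>, <F>) and the tagging convention are illustrative
# assumptions, not the paper's exact scheme.

def tag_source(tokens, entity_genders):
    """Insert a gender tag immediately before each tagged entity token.

    tokens: list of source tokens, e.g. ["the", "doctor", "met", "the", "nurse"]
    entity_genders: dict mapping token index -> "M" or "F"
    """
    tagged = []
    for i, tok in enumerate(tokens):
        if i in entity_genders:
            tagged.append(f"<{entity_genders[i]}>")  # word-level tag
        tagged.append(tok)
    return tagged

# Two entities with different genders in one sentence: a single sentence-level
# tag cannot distinguish them, but word-level tags can.
src = ["the", "doctor", "met", "the", "nurse"]
tags = {1: "F", 4: "M"}
print(" ".join(tag_source(src, tags)))
# -> the <F> doctor met the <M> nurse
```

The tagged tokens are simply prepended to the source sequence before translation, so a standard NMT model can condition each entity's target-side inflection on its own tag rather than on a single sentence-level feature.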
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- Hi Guys or Hi Folks? Benchmarking Gender-Neutral Machine Translation with the GeNTE Corpus [15.388894407006852]
Machine translation (MT) often defaults to masculine and stereotypical representations by making undue binary gender assumptions.
Our work addresses the rising demand for inclusive language by focusing head-on on gender-neutral translation from English to Italian.
arXiv Detail & Related papers (2023-10-08T21:44:00Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender Lost In Translation: How Bridging The Gap Between Languages Affects Gender Bias in Zero-Shot Multilingual Translation [12.376309678270275]
This work examines how bridging the gap between languages for which parallel data is not available affects gender bias in multilingual NMT.
We study the effect of encouraging language-agnostic hidden representations on models' ability to preserve gender.
We find that language-agnostic representations mitigate zero-shot models' masculine bias; as the level of gender inflection in the bridge language increases, pivoting surpasses zero-shot translation in preserving speaker-related gender agreement more fairly.
arXiv Detail & Related papers (2023-05-26T13:51:50Z)
- Analyzing Gender Representation in Multilingual Models [59.21915055702203]
We focus on the representation of gender distinctions as a practical case study.
We examine the extent to which the gender concept is encoded in shared subspaces across different languages.
arXiv Detail & Related papers (2022-04-20T00:13:01Z)
- Generating Gender Augmented Data for NLP [3.5557219875516655]
Gender bias is a frequent occurrence in NLP-based applications, especially in gender-inflected languages.
This paper proposes an automatic and generalisable rewriting approach for short conversational sentences.
The proposed approach is based on a neural machine translation (NMT) system trained to 'translate' from one gender alternative to another.
arXiv Detail & Related papers (2021-07-13T11:13:21Z)
- Improving Gender Translation Accuracy with Filtered Self-Training [14.938401898546548]
Machine translation systems often output incorrect gender, even when the gender is clear from context.
We propose a gender-filtered self-training technique to improve gender translation accuracy on unambiguously gendered inputs.
arXiv Detail & Related papers (2021-04-15T18:05:29Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate, using no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.