Hi Guys or Hi Folks? Benchmarking Gender-Neutral Machine Translation
with the GeNTE Corpus
- URL: http://arxiv.org/abs/2310.05294v1
- Date: Sun, 8 Oct 2023 21:44:00 GMT
- Authors: Andrea Piergentili, Beatrice Savoldi, Dennis Fucci, Matteo Negri,
Luisa Bentivogli
- Abstract summary: Machine translation (MT) often defaults to masculine and stereotypical representations by making undue binary gender assumptions.
Our work addresses the rising demand for inclusive language by focusing head-on on gender-neutral translation from English to Italian.
- Score: 15.388894407006852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender inequality is embedded in our communication practices and perpetuated
in translation technologies. This becomes particularly apparent when
translating into grammatical gender languages, where machine translation (MT)
often defaults to masculine and stereotypical representations by making undue
binary gender assumptions. Our work addresses the rising demand for inclusive
language by focusing head-on on gender-neutral translation from English to
Italian. We start from the essentials: proposing a dedicated benchmark and
exploring automated evaluation methods. First, we introduce GeNTE, a natural,
bilingual test set for gender-neutral translation, whose creation was informed
by a survey on the perception and use of neutral language. Based on GeNTE, we
then overview existing reference-based evaluation approaches, highlight their
limits, and propose a reference-free method more suitable to assess
gender-neutral translation.
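The reference-free idea mentioned above can be illustrated with a toy sketch: instead of comparing a system output against a gold reference, classify the output itself as gendered or neutral. The word lists, example sentences, and classifier below are invented for illustration only and are not the paper's actual method or resources.

```python
# Toy reference-free evaluation sketch (hypothetical, not the GeNTE method):
# flag an Italian output as "gendered" if it contains any form from a small
# illustrative lexicon of gender-marked words, and report the share of
# outputs judged neutral.

GENDERED_FORMS = {
    "benvenuto", "benvenuta",   # "welcome" (masc./fem. singular)
    "benvenuti", "benvenute",   # "welcome" (masc./fem. plural)
    "tutti", "tutte",           # "everyone" (masc./fem. plural)
}

def is_neutral(translation: str) -> bool:
    """Return True if no gender-marked form from the toy lexicon appears."""
    words = (w.strip(".,!?") for w in translation.lower().split())
    return not any(w in GENDERED_FORMS for w in words)

def neutrality_rate(outputs: list[str]) -> float:
    """Fraction of system outputs judged neutral by the toy classifier."""
    return sum(is_neutral(o) for o in outputs) / len(outputs)

outputs = [
    "Benvenuti a tutti!",        # gendered (masculine plural forms)
    "Grazie a chi partecipa.",   # neutral rephrasing ("thanks to those who participate")
]
print(neutrality_rate(outputs))  # → 0.5
```

A lexicon lookup is of course far too crude for real evaluation (Italian gender marking is pervasive and context-dependent); the point is only the evaluation shape: a classifier over outputs replaces comparison against references.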
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English [0.0]
We introduce GATE X-E, an extension to the GATE corpus, that consists of human translations from Turkish, Hungarian, Finnish, and Persian into English.
The dataset features natural sentences with a wide range of sentence lengths and domains, challenging translation rewriters on various linguistic phenomena.
We present an English gender rewriting solution built on GPT-3.5 Turbo and use GATE X-E to evaluate it.
arXiv Detail & Related papers (2023-11-15T10:25:14Z)
- Don't Overlook the Grammatical Gender: Bias Evaluation for Hindi-English Machine Translation [0.0]
Existing evaluation benchmarks primarily focus on English as the source language of translation.
For source languages other than English, studies often employ gender-neutral sentences for bias evaluation.
We emphasise the significance of tailoring bias evaluation test sets to account for grammatical gender markers in the source language.
arXiv Detail & Related papers (2023-11-11T09:28:43Z)
- Gender Inflected or Bias Inflicted: On Using Grammatical Gender Cues for Bias Evaluation in Machine Translation [0.0]
We use Hindi as the source language and construct two sets of gender-specific sentences to evaluate different Hindi-English (HI-EN) NMT systems.
Our work highlights the importance of considering the nature of language when designing such extrinsic bias evaluation datasets.
arXiv Detail & Related papers (2023-11-07T07:09:59Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with 1% word error rate with no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It [18.148675498274866]
We propose schemes for incorporating explicit word-level gender inflection tags into Neural Machine Translation.
We find that simple existing approaches can over-generalize a gender-feature to multiple entities in a sentence.
We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention.
arXiv Detail & Related papers (2020-10-11T20:05:42Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences of its use.