Translate With Care: Addressing Gender Bias, Neutrality, and Reasoning in Large Language Model Translations
- URL: http://arxiv.org/abs/2506.00748v1
- Date: Sat, 31 May 2025 23:27:07 GMT
- Title: Translate With Care: Addressing Gender Bias, Neutrality, and Reasoning in Large Language Model Translations
- Authors: Pardis Sadat Zahraei, Ali Emami
- Abstract summary: We introduce the Translate-with-Care dataset, comprising 3,950 challenging scenarios across six low- to mid-resource languages. Our analysis reveals a universal struggle in translating genderless content, resulting in gender stereotyping and reasoning errors. Google Translate and GPT-4 showed particularly strong bias, favoring male pronouns 4-6 times more than feminine ones.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Addressing gender bias and maintaining logical coherence in machine translation remains challenging, particularly when translating between natural gender languages, like English, and genderless languages, such as Persian, Indonesian, and Finnish. We introduce the Translate-with-Care (TWC) dataset, comprising 3,950 challenging scenarios across six low- to mid-resource languages, to assess translation systems' performance. Our analysis of diverse technologies, including GPT-4, mBART-50, NLLB-200, and Google Translate, reveals a universal struggle in translating genderless content, resulting in gender stereotyping and reasoning errors. All models preferred masculine pronouns when gender stereotypes could influence choices. Google Translate and GPT-4 showed particularly strong bias, favoring male pronouns 4-6 times more than feminine ones in leadership and professional success contexts. Fine-tuning mBART-50 on TWC substantially resolved these biases and errors, led to strong generalization, and surpassed proprietary LLMs while remaining open-source. This work emphasizes the need for targeted approaches to gender and semantic coherence in machine translation, particularly for genderless languages, contributing to more equitable and accurate translation systems.
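The 4-6x pronoun skew reported above can be quantified with a simple masculine-to-feminine pronoun ratio over a model's English translations. The sketch below is a minimal, hypothetical illustration of such a metric; the pronoun lists and example sentences are illustrative and not taken from the TWC dataset.

```python
# Hypothetical sketch: masculine-to-feminine pronoun ratio in English
# translations of genderless-language sentences. Pronoun sets and the
# example inputs are illustrative only.
import re

MASCULINE = {"he", "him", "his", "himself"}
FEMININE = {"she", "her", "hers", "herself"}

def pronoun_ratio(translations):
    """Return (masculine_count, feminine_count, ratio) over a list of
    English translations; ratio is masculine / feminine."""
    masc = fem = 0
    for sentence in translations:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        masc += sum(t in MASCULINE for t in tokens)
        fem += sum(t in FEMININE for t in tokens)
    ratio = masc / fem if fem else float("inf")
    return masc, fem, ratio

translations = [
    "He led the company to record profits.",
    "His strategy impressed the board.",
    "She was promoted to director.",
]
print(pronoun_ratio(translations))  # (2, 1, 2.0)
```

A ratio near 1.0 on stereotype-sensitive inputs would indicate balanced pronoun choice; the paper reports ratios of roughly 4-6 for Google Translate and GPT-4 in leadership contexts.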
Related papers
- Gender Bias in English-to-Greek Machine Translation [0.0]
We find persistent gender bias in translations by both Google Translate and DeepL.<n>GPT-4o shows promise, generating appropriate gendered and neutral alternatives for most ambiguous cases.
arXiv Detail & Related papers (2025-06-11T09:44:12Z)
- EuroGEST: Investigating gender stereotypes in multilingual language models [53.88459905621724]
Large language models increasingly support multiple languages, yet most benchmarks for gender bias remain English-centric. We introduce EuroGEST, a dataset designed to measure gender-stereotypical reasoning in LLMs across English and 29 European languages.
arXiv Detail & Related papers (2025-06-04T11:58:18Z)
- FairTranslate: An English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity [0.6827423171182154]
Large Language Models (LLMs) are increasingly leveraged for translation tasks but often fall short when translating inclusive language. This paper presents a novel, fully human-annotated dataset designed to evaluate non-binary gender biases in machine translation systems from English to French.
arXiv Detail & Related papers (2025-04-22T14:35:16Z)
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases. GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- GATE X-E: A Challenge Set for Gender-Fair Translations from Weakly-Gendered Languages [0.0]
We introduce GATE X-E, an extension to the GATE corpus, that consists of human translations from Turkish, Hungarian, Finnish, and Persian into English.
The dataset features natural sentences with a wide range of sentence lengths and domains, challenging translation rewriters on various linguistic phenomena.
We present a translation gender rewriting solution built with GPT-4 and use GATE X-E to evaluate it.
arXiv Detail & Related papers (2024-02-22T04:36:14Z)
- Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English [0.0]
We introduce GATE X-E, an extension to the GATE corpus, that consists of human translations from Turkish, Hungarian, Finnish, and Persian into English.
The dataset features natural sentences with a wide range of sentence lengths and domains, challenging translation rewriters on various linguistic phenomena.
We present an English gender rewriting solution built on GPT-3.5 Turbo and use GATE X-E to evaluate it.
arXiv Detail & Related papers (2023-11-15T10:25:14Z)
- The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages [51.2321117760104]
This paper describes the Gender-GAP Pipeline, an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages.
The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation.
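The lexicon-based counting the Gender-GAP entry describes can be sketched in a few lines. This is a hypothetical, heavily simplified stand-in: the tiny English lexicon below is illustrative only, whereas the actual pipeline uses a curated multilingual lexicon of gendered person-nouns covering 55 languages.

```python
# Minimal sketch of lexicon-based gender representation counting,
# in the spirit of the Gender-GAP Pipeline. The lexicon here is a
# tiny illustrative stand-in for the paper's multilingual lexicon.
from collections import Counter

LEXICON = {
    "masculine": {"man", "men", "father", "brother", "he", "him"},
    "feminine": {"woman", "women", "mother", "sister", "she", "her"},
}

def gender_representation(texts):
    """Count gendered person-words per class and return each class's
    share of all gendered mentions found in the corpus."""
    counts = Counter()
    for text in texts:
        for token in text.lower().split():
            word = token.strip(".,!?;:'\"")
            for gender, words in LEXICON.items():
                if word in words:
                    counts[gender] += 1
    total = sum(counts.values()) or 1
    return {g: counts[g] / total for g in LEXICON}

corpus = ["The man thanked his mother.", "He met the woman and her sister."]
print(gender_representation(corpus))
```

Applied to WMT-scale training data, a share well above 0.5 for the masculine class would reproduce the skew toward masculine representation that the paper reports.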
arXiv Detail & Related papers (2023-08-31T17:20:50Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Mitigating Gender Bias in Machine Translation through Adversarial Learning [0.8883733362171032]
We present an adversarial learning framework that mitigates gender bias in seq2seq machine translation.
Our framework improves the disparity in translation quality for sentences with male vs. female entities by 86% for English-German translation and 91% for English-French translation.
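The "improves the disparity by 86%" figure above can be read as a percentage reduction in the quality gap between sentences with male vs. female entities. The arithmetic below is a hypothetical illustration; the BLEU-style scores are made up and are not from the paper.

```python
# Hypothetical arithmetic behind an "improves the disparity by 86%"
# claim: disparity is taken as the absolute gap in a translation-quality
# score (e.g. BLEU) between male-entity and female-entity sentences.
# All scores below are illustrative, not from the paper.

def disparity(male_score, female_score):
    """Absolute quality gap between the two sentence groups."""
    return abs(male_score - female_score)

def percent_reduction(before, after):
    """Percentage by which the disparity shrank after debiasing."""
    return 100.0 * (before - after) / before

before = disparity(34.0, 29.0)   # gap of 5.0 BLEU before adversarial training
after = disparity(33.5, 32.8)    # gap of ~0.7 BLEU after
print(round(percent_reduction(before, after)))  # 86
```

Note that the reduction is relative: shrinking a 5.0-point gap to 0.7 points yields an 86% improvement even though both systems' absolute scores change only modestly.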
arXiv Detail & Related papers (2022-03-20T23:35:09Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.