Efficient Gender Debiasing of Pre-trained Indic Language Models
- URL: http://arxiv.org/abs/2209.03661v1
- Date: Thu, 8 Sep 2022 09:15:58 GMT
- Title: Efficient Gender Debiasing of Pre-trained Indic Language Models
- Authors: Neeraja Kirtane, V Manushree, Aditya Kane
- Abstract summary: The gender bias present in the data on which language models are pre-trained gets reflected in the systems that use these models.
In our paper, we measure gender bias associated with occupations in Hindi language models.
Our results show that the bias is reduced after applying our proposed mitigation techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The gender bias present in the data on which language models are pre-trained
gets reflected in the systems that use these models. The model's intrinsic
gender bias shows an outdated and unequal view of women in our culture and
encourages discrimination. Therefore, in order to establish more equitable
systems and increase fairness, it is crucial to identify and mitigate the bias
existing in these models. While there is a significant amount of work in this
area in English, there is a dearth of research in other gendered and
low-resource languages, particularly the Indian languages. English is a
non-gendered language with genderless nouns. Bias-detection methodologies
developed for English therefore cannot be directly deployed in gendered
languages, whose syntax and semantics differ. In our paper, we measure gender
bias associated with occupations in Hindi language models. Our major
contributions in this paper are constructing a novel corpus to evaluate
occupational gender bias in Hindi, quantifying this bias in existing systems
using a well-defined metric, and mitigating it by efficiently fine-tuning our
model. Our results show that the bias is reduced after applying our proposed
mitigation techniques. Our codebase is publicly available.
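The abstract does not spell out the paper's exact metric, but the core measurement idea it describes (comparing how strongly a masked language model associates occupations with male vs. female gendered fillers) can be sketched as a mean log-probability ratio. This is a minimal illustration, not the paper's actual metric: the occupation names, template, and probability values below are hypothetical, and in practice the probabilities would come from a Hindi masked LM (e.g. a model like MuRIL, an assumption on our part) queried with Hindi templates.

```python
import math

def occupation_bias_score(probs):
    """Mean log-ratio of male vs. female filler probabilities across
    occupation templates. Positive values indicate a male skew, negative
    values a female skew, and zero indicates balance.

    `probs` maps each occupation to (p_male, p_female): the masked-LM
    probabilities of the male and female gendered filler in a template
    such as "[MASK] is a <occupation>".
    """
    ratios = [math.log(p_male / p_female) for p_male, p_female in probs.values()]
    return sum(ratios) / len(ratios)

# Toy probabilities standing in for masked-LM outputs
# (illustrative numbers, not taken from the paper).
toy_probs = {
    "doctor":  (0.30, 0.10),  # male filler 3x as likely
    "nurse":   (0.05, 0.20),  # female filler 4x as likely
    "teacher": (0.12, 0.12),  # balanced
}
print(round(occupation_bias_score(toy_probs), 4))  # -0.0959
```

Averaging log-ratios (rather than raw probabilities) keeps the score symmetric: a male-skewed and an equally female-skewed occupation cancel out, so the aggregate reflects net directional bias across the corpus.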
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Investigating Gender Bias in Turkish Language Models [3.100560442806189]
We investigate the significance of gender bias in Turkish language models.
We build upon existing bias evaluation frameworks and extend them to the Turkish language.
Specifically, we evaluate Turkish language models for their embedded ethnic bias toward Kurdish people.
arXiv Detail & Related papers (2024-04-17T20:24:41Z)
- Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You [64.74707085021858]
We show that multilingual models suffer from significant gender biases just as monolingual models do.
We propose a novel benchmark, MAGBIG, intended to foster research on gender bias in multilingual models.
Our results show that not only do models exhibit strong gender biases but they also behave differently across languages.
arXiv Detail & Related papers (2024-01-29T12:02:28Z)
- Gender Inflected or Bias Inflicted: On Using Grammatical Gender Cues for Bias Evaluation in Machine Translation [0.0]
We use Hindi as the source language and construct two sets of gender-specific sentences to evaluate different Hindi-English (HI-EN) NMT systems.
Our work highlights the importance of considering the nature of language when designing such extrinsic bias evaluation datasets.
arXiv Detail & Related papers (2023-11-07T07:09:59Z)
- DiFair: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias [13.928591341824248]
Debiasing techniques have been proposed to mitigate the gender bias that is prevalent in pretrained language models.
These are often evaluated on datasets that check the extent to which the model is gender-neutral in its predictions.
This evaluation protocol overlooks the possible adverse impact of bias mitigation on useful gender knowledge.
arXiv Detail & Related papers (2023-10-22T15:27:16Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Evaluating Gender Bias in Hindi-English Machine Translation [0.1503974529275767]
We implement a modified version of the TGBI metric based on the grammatical considerations for Hindi.
We compare and contrast the resulting bias measurements across multiple metrics for pre-trained embeddings and the ones learned by our machine translation model.
arXiv Detail & Related papers (2021-06-16T10:35:51Z)
- Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models [104.41668491794974]
We quantify the usage of adjectives and verbs generated by language models surrounding the names of politicians as a function of their gender.
We find that while some words, such as dead and designated, are associated with both male and female politicians, a few specific words, such as beautiful and divorced, are predominantly associated with female politicians.
arXiv Detail & Related papers (2021-04-15T15:03:26Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with 1% word error rate with no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.