Grammatical gender associations outweigh topical gender bias in
crosslinguistic word embeddings
- URL: http://arxiv.org/abs/2005.08864v1
- Date: Mon, 18 May 2020 16:39:16 GMT
- Title: Grammatical gender associations outweigh topical gender bias in
crosslinguistic word embeddings
- Authors: Katherine McCurdy and Oguz Serbetci
- Abstract summary: Crosslinguistic word embeddings reveal that topical gender bias interacts with, and is surpassed in magnitude by, the effect of grammatical gender associations.
This finding has implications for downstream applications such as machine translation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has demonstrated that vector space models of semantics can
reflect undesirable biases in human culture. Our investigation of
crosslinguistic word embeddings reveals that topical gender bias interacts
with, and is surpassed in magnitude by, the effect of grammatical gender
associations, and both may be attenuated by corpus lemmatization. This finding
has implications for downstream applications such as machine translation.
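The paper's core comparison can be illustrated with a toy association measure: score each noun by the difference of its cosine similarity to masculine versus feminine anchor words. This is a minimal sketch in plain NumPy with made-up two-dimensional vectors and hypothetical anchors ("he"/"she"); it is not the authors' actual experimental setup.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word_vec, masc_anchor, fem_anchor):
    """Positive -> closer to the masculine anchor, negative -> feminine."""
    return cosine(word_vec, masc_anchor) - cosine(word_vec, fem_anchor)

# Toy 2-d "embeddings": the first axis loosely stands in for a gender direction.
he, she = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
bridge = np.array([0.8, 0.6])   # e.g. a grammatically masculine noun

print(gender_association(bridge, he, she))  # positive for this toy vector
```

With real embeddings, the same score can be computed before and after corpus lemmatization to observe the attenuation effect the abstract describes.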
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which quantifies ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
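The "simple classifier" finding can be mimicked on synthetic data: plant a gender component in otherwise random vectors and check that even a nearest-centroid rule recovers it well above chance. This is a hypothetical toy, not the paper's classifier or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings": 50-d noise plus a planted gender component in dim 0.
def make_nouns(n, gender_sign):
    vecs = rng.normal(size=(n, 50))
    vecs[:, 0] += 2.0 * gender_sign
    return vecs

masc_train, fem_train = make_nouns(100, +1), make_nouns(100, -1)

# Nearest-centroid classifier: predict the gender whose class mean is closer.
c_masc, c_fem = masc_train.mean(axis=0), fem_train.mean(axis=0)

def predict(vec):
    dist_m = np.linalg.norm(vec - c_masc)
    dist_f = np.linalg.norm(vec - c_fem)
    return "masc" if dist_m < dist_f else "fem"

masc_test = make_nouns(50, +1)
acc = np.mean([predict(v) == "masc" for v in masc_test])
print(f"accuracy on held-out masculine nouns: {acc:.2f}")  # well above chance
```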
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- Exploring the Impact of Training Data Distribution and Subword Tokenization on Gender Bias in Machine Translation [19.719314005149883]
We study the effect of tokenization on gender bias in machine translation.
We observe that female and non-stereotypical gender inflections of profession names tend to be split into subword tokens.
We show that analyzing subword splits provides good estimates of gender-form imbalance in the training data.
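The subword-split signal can be sketched as follows: forms absent from a tokenizer's vocabulary get split into pieces, which flags them as rare in the training data. The vocabulary and word pair below are hypothetical stand-ins, not the paper's tokenizer or corpus.

```python
# Hypothetical subword vocabulary: the masculine German form "Lehrer"
# ("teacher") is a single token, the feminine "Lehrerin" is not.
vocab = {"lehrer", "lehr", "in"}

def is_split(word, vocab):
    """A form missing from the vocabulary would be split into subwords."""
    return word.lower() not in vocab

forms = {"Lehrer": "masc", "Lehrerin": "fem"}
split_by_gender = {g: is_split(w, vocab) for w, g in forms.items()}
print(split_by_gender)  # {'masc': False, 'fem': True}
```

Aggregating such split rates over many profession names gives the gender-form imbalance estimate the summary mentions.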
arXiv Detail & Related papers (2023-09-21T21:21:55Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals [3.0349733976070015]
We demonstrate that word embeddings learn the association between a noun and its grammatical gender in grammatically gendered languages.
We show that disentangling grammatical gender signals from word embeddings may lead to improvement in semantic machine learning tasks.
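A common building block for this kind of disentangling (the paper's full method is more involved) is projecting each embedding onto the complement of an estimated grammatical-gender direction. The anchor pair below is a made-up illustration.

```python
import numpy as np

def remove_direction(vec, direction):
    """Subtract the component of `vec` along `direction` (unit-normalized)."""
    d = direction / np.linalg.norm(direction)
    return vec - (vec @ d) * d

# Toy example: estimate the gender direction as the difference of two anchors.
el, la = np.array([1.0, 0.2, 0.0]), np.array([-1.0, 0.2, 0.0])
gender_dir = el - la

noun = np.array([0.5, 0.3, 0.8])
debiased = remove_direction(noun, gender_dir)
print(debiased)  # [0.  0.3 0.8] -- component along the gender direction is gone
```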
arXiv Detail & Related papers (2022-06-03T17:11:00Z)
- Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese Adjectives [0.0]
This paper investigates gender bias in static word embeddings from a unique perspective, Chinese adjectives.
Through a comparison between the produced results and a human-scored data set, we demonstrate how gender bias encoded in word embeddings differs from people's attitudes.
arXiv Detail & Related papers (2021-06-01T02:12:45Z)
- Pick a Fight or Bite your Tongue: Investigation of Gender Differences in Idiomatic Language Usage [9.892162266128306]
We compile a novel, large and diverse corpus of spontaneous linguistic productions annotated with speakers' gender.
We perform the first large-scale empirical study of distinctions in the usage of figurative language between male and female authors.
arXiv Detail & Related papers (2020-10-31T18:44:07Z)
- An exploration of the encoding of grammatical gender in word embeddings [0.6461556265872973]
The study of grammatical gender based on word embeddings can give insight into discussions on how grammatical genders are determined.
It is found that there is an overlap in how grammatical gender is encoded in Swedish, Danish, and Dutch embeddings.
arXiv Detail & Related papers (2020-08-05T06:01:46Z)
- Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation [94.98656228690233]
We propose a technique that purifies the word embeddings against corpus regularities prior to inferring and removing the gender subspace.
Our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.
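The two-stage idea can be roughly sketched as: first "purify" the embeddings by dropping a dominant corpus-regularity direction (here stood in for by the first principal component), then project out the gender direction. This is an illustrative simplification on synthetic vectors, not the paper's exact algorithm.

```python
import numpy as np

def top_principal_direction(embs):
    """First principal component of mean-centered embeddings (via SVD)."""
    centered = embs - embs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def project_out(vecs, direction):
    """Remove the component along `direction` from every row of `vecs`."""
    d = direction / np.linalg.norm(direction)
    return vecs - np.outer(vecs @ d, d)

rng = np.random.default_rng(1)
embs = rng.normal(size=(200, 20))
embs[:, 0] += rng.normal(size=200) * 5.0  # dominant "corpus regularity" axis

# Stage 1 (stand-in for "purifying"): drop the dominant principal direction.
purified = project_out(embs, top_principal_direction(embs))

# Stage 2: remove a gender direction (toy axis-aligned stand-in).
gender_dir = np.zeros(20); gender_dir[1] = 1.0
debiased = project_out(purified, gender_dir)

print(np.abs(debiased @ gender_dir).max())  # ~0: gender component removed
```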
arXiv Detail & Related papers (2020-05-03T02:33:20Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.