Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
- URL: http://arxiv.org/abs/2304.12810v1
- Date: Sat, 22 Apr 2023 03:53:24 GMT
- Title: Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
- Authors: Katie Seaborn, Shruti Chandra, Thibault Fabre
- Abstract summary: Critical scholarship has elevated the problem of gender bias in data sets used to train virtual assistants (VAs).
We examined two natural language processing (NLP) data sets.
We found that when gendered language was present, so were gender biases and especially masculine biases.
We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.
- Score: 10.036312061637764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Critical scholarship has elevated the problem of gender bias in data sets
used to train virtual assistants (VAs). Most work has focused on explicit
biases in language, especially against women, girls, femme-identifying people,
and genderqueer folk; implicit associations through word embeddings; and
limited models of gender and masculinities, especially toxic masculinities,
conflation of sex and gender, and a sex/gender binary framing of the masculine
as diametric to the feminine. Yet, we must also interrogate how masculinities
are "coded" into language and the assumption of "male" as the linguistic
default: implicit masculine biases. To this end, we examined two natural
language processing (NLP) data sets. We found that when gendered language was
present, so were gender biases and especially masculine biases. Moreover, these
biases related in nuanced ways to the NLP context. We offer a new dictionary
called AVA that covers ambiguous associations between gendered language and the
language of VAs.
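For a concrete sense of the kind of analysis described above, the sketch below counts masculine- and feminine-coded terms in a small corpus with a hand-built dictionary. The word lists and the gender_term_counts helper are illustrative assumptions, not the authors' AVA dictionary or coding scheme.

```python
from collections import Counter
import re

# Illustrative word lists only; the paper's AVA dictionary is richer and
# covers ambiguous associations with VA language not represented here.
MASCULINE_TERMS = {"he", "him", "his", "man", "men", "male", "mr"}
FEMININE_TERMS = {"she", "her", "hers", "woman", "women", "female", "ms"}

def gender_term_counts(texts):
    """Count masculine- and feminine-coded tokens across a corpus."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MASCULINE_TERMS:
                counts["masculine"] += 1
            elif token in FEMININE_TERMS:
                counts["feminine"] += 1
    return counts

corpus = [
    "He asked his assistant to call him a cab.",
    "The user said she wanted her reminder moved.",
]
print(gender_term_counts(corpus))  # Counter({'masculine': 3, 'feminine': 2})
```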
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the benchmark AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
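A minimal sketch of the probe this summary describes: a bag-of-words classifier predicting a noun's grammatical gender from the adjectives a model produced for it. The adjective sets and labels below are invented; the paper's data comes from prompting LLMs across languages with grammatical gender.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: adjectives a model produced for nouns, labeled with the
# noun's grammatical gender (invented for illustration).
adjective_sets = [
    "strong sturdy old",       # e.g., produced for a masculine noun
    "graceful delicate warm",  # e.g., produced for a feminine noun
    "rugged heavy tall",
    "elegant soft gentle",
]
labels = ["masc", "fem", "masc", "fem"]

# Bag-of-adjectives -> logistic regression, standing in for the
# "simple classifier" probing gender signal in descriptions.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(adjective_sets, labels)

print(clf.predict(["sturdy heavy"]))  # likely ['masc'] on this toy data
```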
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- Building Bridges: A Dataset for Evaluating Gender-Fair Machine Translation into German [17.924716793621627]
We study gender-fair language in English-to-German machine translation (MT).
We conduct the first benchmark study involving two commercial systems and six neural MT models.
Our findings show that most systems produce mainly masculine forms and rarely gender-neutral variants.
arXiv Detail & Related papers (2024-06-10T09:39:19Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender, names and other mysteries: Towards the ambiguous for gender-inclusive translation [7.322734499960981]
This paper explores the case where the source sentence lacks explicit gender markers, but the target sentence contains them due to richer grammatical gender.
We find that many name-gender co-occurrences in MT data are not resolvable with 'unambiguous gender' in the source language.
We discuss potential steps toward gender-inclusive translation which accepts the ambiguity in both gender and translation.
arXiv Detail & Related papers (2023-06-07T16:21:59Z)
- Gender Lost In Translation: How Bridging The Gap Between Languages Affects Gender Bias in Zero-Shot Multilingual Translation [12.376309678270275]
We study how bridging the gap between languages for which parallel data is not available affects gender bias in multilingual NMT.
We study the effect of encouraging language-agnostic hidden representations on models' ability to preserve gender.
We find that language-agnostic representations mitigate the masculine bias of zero-shot models. As gender inflection in the bridge language increases, pivoting preserves speaker-related gender agreement more fairly than zero-shot translation.
arXiv Detail & Related papers (2023-05-26T13:51:50Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
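A rough sketch of how template-based probes of this kind are commonly built; the templates and pronoun sets below are invented for illustration and are not TANGO's actual contents.

```python
# Illustrative pronoun paradigms; TANGO's real templates and pronouns
# are curated from a TGNB-oriented community and are far richer.
PRONOUN_SETS = {
    "they": {"nom": "they", "acc": "them", "gen": "their"},
    "xe":   {"nom": "xe",   "acc": "xem",  "gen": "xyr"},
    "she":  {"nom": "she",  "acc": "her",  "gen": "her"},
}

TEMPLATES = [
    "{nom} went to the store because {nom} needed groceries.",
    "I met {acc} yesterday and borrowed {gen} book.",
]

def fill_templates():
    """Expand each template with every pronoun set, yielding prompts."""
    for name, forms in PRONOUN_SETS.items():
        for template in TEMPLATES:
            yield name, template.format(**forms)

for name, prompt in fill_templates():
    print(f"[{name}] {prompt}")
```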
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
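One way to probe such compound bias, sketched under assumed prompts (this is not the paper's exact protocol): compare the probability GPT-2 assigns to "junior" versus "senior" continuations for differently gendered subjects.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt: str, word: str) -> float:
    """Probability of `word` (its first subword) following `prompt`."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    # Leading space so the continuation is tokenized as it would appear
    # mid-sentence; we score its first subword token.
    word_id = tokenizer.encode(" " + word)[0]
    return probs[word_id].item()

# Invented probe prompts comparing seniority continuations by gender.
for subject in ("He", "She"):
    for word in ("junior", "senior"):
        p = next_token_prob(f"{subject} is a", word)
        print(f"P({word} | '{subject} is a') = {p:.6f}")
```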
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns [0.15293427903448023]
We show that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties.
In contrast, for language models, gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance.
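Perplexity comparisons of this kind can be sketched as follows; the sentences and the choice of GPT-2 are illustrative stand-ins for the paper's Danish, English, and Swedish setups.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """Per-token perplexity of `sentence` under the causal LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean cross-entropy;
        # exponentiating it gives perplexity.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Illustrative minimal pair; the paper's comparisons cover binary and
# gender-neutral pronouns across three languages.
for s in ("The doctor said he would call back.",
          "The doctor said they would call back."):
    print(f"{perplexity(s):7.2f}  {s}")
```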
arXiv Detail & Related papers (2022-04-11T09:42:02Z)
- Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus [20.766890957411132]
Translating from languages without productive grammatical gender like English into gender-marked languages is a well-known difficulty for machines.
Can audio provide additional information to reduce gender bias?
We present the first thorough investigation of gender bias in speech translation and release a benchmark useful for future studies.
arXiv Detail & Related papers (2020-06-10T09:55:38Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
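A minimal sketch of what annotation along such dimensions might look like as a data structure; the dimension names follow the paper's about/to/as decomposition, while the example utterance and labels are invented.

```python
from dataclasses import dataclass

@dataclass
class GenderAnnotation:
    """One utterance labeled along the framework's dimensions: the
    gender of who is spoken ABOUT, spoken TO, and speaking AS."""
    text: str
    about: str        # e.g., "masculine", "feminine", "neutral", "unknown"
    to: str
    speaking_as: str

# Invented example annotation for illustration.
example = GenderAnnotation(
    text="My brother taught me this recipe.",
    about="masculine",      # the brother is spoken about
    to="unknown",           # addressee gender is unmarked
    speaking_as="unknown",  # speaker gender is unmarked
)
print(example)
```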
arXiv Detail & Related papers (2020-05-01T21:23:20Z)