Theories of "Gender" in NLP Bias Research
- URL: http://arxiv.org/abs/2205.02526v1
- Date: Thu, 5 May 2022 09:20:53 GMT
- Title: Theories of "Gender" in NLP Bias Research
- Authors: Hannah Devinney, Jenny Björklund, Henrik Björklund
- Abstract summary: We survey nearly 200 articles concerning gender bias in NLP.
We find that the majority of the articles do not make their theorization of gender explicit.
Many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of concern around Natural Language Processing (NLP) technologies
containing and perpetuating social biases has led to a rich and rapidly growing
area of research. Gender bias is one of the central biases being analyzed, but
to date there is no comprehensive analysis of how "gender" is theorized in the
field. We survey nearly 200 articles concerning gender bias in NLP to discover
how the field conceptualizes gender both explicitly (e.g. through definitions
of terms) and implicitly (e.g. through how gender is operationalized in
practice). To trace emerging trajectories of thought, we split these articles
into two time periods.
We find that the majority of the articles do not make their theorization of
gender explicit, even if they clearly define "bias." Almost none use a model of
gender that is intersectional or inclusive of nonbinary genders, and many
conflate sex characteristics, social gender, and linguistic gender in ways that
disregard the existence and experience of trans, nonbinary, and intersex
people. Statements acknowledging that gender is a complicated reality become
more common between the two time periods; however, very few articles put this
acknowledgment into practice. In addition to analyzing these
findings, we provide specific recommendations to facilitate interdisciplinary
work, and to incorporate theory and methodology from Gender Studies. Our hope
is that this will produce more inclusive gender bias research in NLP.
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous Attitude Words), a new benchmark.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation
Large Language Models (LLMs) can generate biased and toxic responses.
We propose a conditional text generation mechanism without the need for predefined gender phrases and stereotypes.
arXiv Detail & Related papers (2023-11-01T05:31:46Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender, names and other mysteries: Towards the ambiguous for gender-inclusive translation
This paper explores the case where the source sentence lacks explicit gender markers, but the target sentence contains them due to richer grammatical gender.
We find that many name-gender co-occurrences in MT data are not resolvable with 'unambiguous gender' in the source language.
We discuss potential steps toward gender-inclusive translation which accepts the ambiguity in both gender and translation.
arXiv Detail & Related papers (2023-06-07T16:21:59Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access
Information access research (and development) sometimes makes use of gender.
Such work often rests on assumptions about gender that are not aligned with current understandings of what gender is.
Most papers we review rely on a binary notion of gender, even if they acknowledge that gender cannot be split into two categories.
arXiv Detail & Related papers (2023-01-12T01:21:02Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- A Survey on Gender Bias in Natural Language Processing
We present a survey of 304 papers on gender bias in natural language processing.
We compare and contrast approaches to detecting and mitigating gender bias.
We find that research on gender bias suffers from four core limitations.
arXiv Detail & Related papers (2021-12-28T14:54:18Z)
- Multi-Dimensional Gender Bias Classification
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.