MISGENDERED: Limits of Large Language Models in Understanding Pronouns
- URL: http://arxiv.org/abs/2306.03950v2
- Date: Fri, 7 Jul 2023 05:18:34 GMT
- Title: MISGENDERED: Limits of Large Language Models in Understanding Pronouns
- Authors: Tamanna Hossain, Sunipa Dev, Sameer Singh
- Abstract summary: We evaluate popular language models for their ability to correctly use English gender-neutral pronouns.
We introduce MISGENDERED, a framework for evaluating large language models' ability to correctly use preferred pronouns.
- Score: 46.276320374441056
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Content Warning: This paper contains examples of misgendering and erasure
that could be offensive and potentially triggering.
Gender bias in language technologies has been widely studied, but research
has mostly been restricted to a binary paradigm of gender. It is essential also
to consider non-binary gender identities, as excluding them can cause further
harm to an already marginalized group. In this paper, we comprehensively
evaluate popular language models for their ability to correctly use English
gender-neutral pronouns (e.g., singular they, them) and neo-pronouns (e.g., ze,
xe, thon) that are used by individuals whose gender identity is not represented
by binary pronouns. We introduce MISGENDERED, a framework for evaluating large
language models' ability to correctly use preferred pronouns, consisting of (i)
instances declaring an individual's pronoun, followed by a sentence with a
missing pronoun, and (ii) an experimental setup for evaluating masked and
auto-regressive language models using a unified method. When prompted
out-of-the-box, language models perform poorly at correctly predicting
neo-pronouns (averaging 7.7% accuracy) and gender-neutral pronouns (averaging
34.2% accuracy). This inability to generalize stems from a lack of
representation of non-binary pronouns in training data and from memorized
associations. Few-shot adaptation with explicit examples in the prompt
improves
performance for neo-pronouns, but only to 64.7% even with 20 shots. We release
the full dataset, code, and demo at
https://tamannahossainkay.github.io/misgendered/
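As a concrete illustration of the setup the abstract describes, below is a minimal sketch of a MISGENDERED-style probe for a masked language model, assuming the HuggingFace transformers library. The name, template, and candidate pronouns are illustrative, not the paper's exact data, and the paper's unified scoring method for masked and auto-regressive models is simplified here to single-token candidates.

```python
# Minimal sketch of a MISGENDERED-style probe (illustrative, not the
# paper's exact data or scoring method).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # any masked LM with a [MASK] token
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# (i) Declare the individual's pronouns, then leave one pronoun blank.
prompt = ("Aamari's pronouns are xe/xem/xyr. "
          "Aamari enjoyed the book that [MASK] read.")
candidates = ["he", "she", "they", "xe", "ze", "thon"]
declared = "xe"

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each candidate at the masked position. Neo-pronouns often split
# into several subwords; the paper handles those with a unified multi-token
# method, which this sketch simplifies by skipping multi-token candidates.
scores = {}
for word in candidates:
    ids = tokenizer(word, add_special_tokens=False).input_ids
    if len(ids) == 1:
        scores[word] = logits[ids[0]].item()

prediction = max(scores, key=scores.get)
print(scores)
print("correct" if prediction == declared
      else f"misgendered: predicted {prediction!r}")
```

Because neo-pronouns like xe frequently fall outside the single-token vocabulary, a probe like this tends to surface exactly the failure mode the abstract reports; a faithful evaluation needs the paper's multi-token scoring.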
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Building Bridges: A Dataset for Evaluating Gender-Fair Machine Translation into German [17.924716793621627]
We study gender-fair language in English-to-German machine translation (MT).
We conduct the first benchmark study involving two commercial systems and six neural MT models.
Our findings show that most systems produce mainly masculine forms and rarely gender-neutral variants.
arXiv Detail & Related papers (2024-06-10T09:39:19Z)
- Transforming Dutch: Debiasing Dutch Coreference Resolution Systems for Non-binary Pronouns [5.5514102920271196]
Gender-neutral pronouns are increasingly being introduced across Western languages.
Recent evaluations have demonstrated that English NLP systems are unable to correctly process gender-neutral pronouns.
This paper examines a Dutch coreference resolution system's performance on gender-neutral pronouns.
arXiv Detail & Related papers (2024-04-30T18:31:19Z)
- Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies [75.85462924188076]
Gender-inclusive NLP research has documented the harmful limitations of gender binary-centric large language models (LLMs).
We find that misgendering is significantly influenced by Byte-Pair Encoding (BPE) tokenization (see the tokenization sketch after this list).
We propose two techniques: (1) pronoun tokenization parity, a method to enforce consistent tokenization across gendered pronouns, and (2) utilizing pre-existing LLM pronoun knowledge to improve neo-pronoun proficiency.
arXiv Detail & Related papers (2023-12-19T01:28:46Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns [0.15293427903448023]
We show that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties.
In contrast, for language models, we show that gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance.
arXiv Detail & Related papers (2022-04-11T09:42:02Z)
- Welcome to the Modern World of Pronouns: Identity-Inclusive Natural Language Processing beyond Gender [23.92148222207458]
We provide an overview of third-person pronoun issues for Natural Language Processing.
We evaluate existing and novel modeling approaches.
We quantify the impact of a more discrimination-free approach on established benchmark data.
arXiv Detail & Related papers (2022-02-24T06:42:11Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when trained on gender-biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
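Tokenization sketch (referenced in the Tokenization Matters entry above): a minimal, hypothetical demonstration of the BPE effect that paper describes, assuming the HuggingFace transformers library and the GPT-2 tokenizer. The pronoun list is illustrative, and the paper's proposed pronoun tokenization parity intervention is not reproduced here.

```python
# Illustrative only: compare how BPE segments binary pronouns versus
# neo-pronouns. Binary pronouns are typically single tokens, while
# neo-pronouns often shatter into several subwords, which the paper
# links to misgendering.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for pronoun in ["he", "she", "they", "xe", "ze", "thon", "xem", "zir"]:
    # Leading space matters: GPT-2's BPE treats word-initial and
    # mid-word occurrences differently.
    tokens = tokenizer.tokenize(" " + pronoun)
    print(f"{pronoun!r}: {len(tokens)} token(s) -> {tokens}")
```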
This list is automatically generated from the titles and abstracts of the papers on this site.