How Conservative are Language Models? Adapting to the Introduction of
Gender-Neutral Pronouns
- URL: http://arxiv.org/abs/2204.10281v1
- Date: Mon, 11 Apr 2022 09:42:02 GMT
- Title: How Conservative are Language Models? Adapting to the Introduction of
Gender-Neutral Pronouns
- Authors: Stephanie Brandl, Ruixiang Cui, Anders Søgaard
- Abstract summary: We show that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties.
In contrast, gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance in language models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender-neutral pronouns have recently been introduced in many
languages, both a) to include non-binary people and b) to serve as a generic
singular. Recent results from psycholinguistics suggest that gender-neutral pronouns (in Swedish) are not
associated with human processing difficulties. This, we show, is in sharp
contrast with automated processing. We show that gender-neutral pronouns in
Danish, English, and Swedish are associated with higher perplexity, more
dispersed attention patterns, and worse downstream performance. We argue that
such conservativity in language models may limit widespread adoption of
gender-neutral pronouns and must therefore be resolved.
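The perplexity gap the abstract reports can be illustrated with the standard definition: perplexity is the exponential of the average negative log-likelihood per token. A minimal sketch, using invented per-token probabilities (not values from the paper) to show how a single low-probability pronoun inflates sentence perplexity:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token probabilities from a language model for two
# otherwise-identical sentences: the gendered pronoun (e.g. Swedish "hon")
# gets probability 0.25, the gender-neutral "hen" only 0.02.
gendered = [math.log(p) for p in [0.2, 0.3, 0.25, 0.3]]
neutral  = [math.log(p) for p in [0.2, 0.3, 0.02, 0.3]]

print(perplexity(gendered))  # lower: the model expects the gendered form
print(perplexity(neutral))   # higher: the rare pronoun raises perplexity
```

A real measurement would use a pretrained model's per-token log-probabilities instead of these toy values, but the arithmetic is the same: one improbable token is enough to raise the sentence-level score.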
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations primarily focus on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- Building Bridges: A Dataset for Evaluating Gender-Fair Machine Translation into German [17.924716793621627]
We study gender-fair language in English-to-German machine translation (MT).
We conduct the first benchmark study involving two commercial systems and six neural MT models.
Our findings show that most systems produce mainly masculine forms and rarely gender-neutral variants.
arXiv Detail & Related papers (2024-06-10T09:39:19Z)
- Transforming Dutch: Debiasing Dutch Coreference Resolution Systems for Non-binary Pronouns [5.5514102920271196]
Gender-neutral pronouns are increasingly being introduced across Western languages.
Recent evaluations have demonstrated that English NLP systems are unable to correctly process gender-neutral pronouns.
This paper examines a Dutch coreference resolution system's performance on gender-neutral pronouns.
arXiv Detail & Related papers (2024-04-30T18:31:19Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- MISGENDERED: Limits of Large Language Models in Understanding Pronouns [46.276320374441056]
We evaluate popular language models for their ability to correctly use English gender-neutral pronouns.
We introduce MISGENDERED, a framework for evaluating large language models' ability to correctly use preferred pronouns.
arXiv Detail & Related papers (2023-06-06T18:27:52Z)
- What about em? How Commercial Machine Translation Fails to Handle (Neo-)Pronouns [26.28827649737955]
Wrong pronoun translations can discriminate against marginalized groups, e.g., non-binary individuals.
We study how three commercial machine translation systems translate 3rd-person pronouns.
Our error analysis shows that the presence of a gender-neutral pronoun often leads to grammatical and semantic translation errors.
arXiv Detail & Related papers (2023-05-25T13:34:09Z)
- Welcome to the Modern World of Pronouns: Identity-Inclusive Natural Language Processing beyond Gender [23.92148222207458]
We provide an overview of 3rd person pronoun issues for Natural Language Processing.
We evaluate existing and novel modeling approaches.
We quantify the impact of a more discrimination-free approach on established benchmark data.
arXiv Detail & Related papers (2022-02-24T06:42:11Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate, using no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.