The Shared Task on Gender Rewriting
- URL: http://arxiv.org/abs/2210.12410v1
- Date: Sat, 22 Oct 2022 10:27:53 GMT
- Title: The Shared Task on Gender Rewriting
- Authors: Bashar Alhafni, Nizar Habash, Houda Bouamor, Ossama Obeid, Sultan
Alrowili, Daliyah Alzeer, Khawlah M. Alshanqiti, Ahmed ElBakry, Muhammad
ElNokrashy, Mohamed Gabr, Abderrahmane Issam, Abdelrahim Qaddoumi, K.
Vijay-Shanker, Mahmoud Zyate
- Abstract summary: The task of gender rewriting refers to generating alternatives of a given sentence to match different target user gender contexts.
This requires changing the grammatical gender of certain words referring to the users.
A total of five teams from four countries participated in the shared task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present the results and findings of the Shared Task on
Gender Rewriting, which was organized as part of the Seventh Arabic Natural
Language Processing Workshop. The task of gender rewriting refers to generating
alternatives of a given sentence to match different target user gender contexts
(e.g., a female speaker with a male listener, a male speaker with a male
listener, etc.). This requires changing the grammatical gender (masculine or
feminine) of certain words referring to the users. In this task, we focus on
Arabic, a gender-marking morphologically rich language. A total of five teams
from four countries participated in the shared task.
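As a toy illustration of the task, the sketch below enumerates the four (speaker, listener) gender contexts and rewrites a listener-referring word using a one-entry lexicon. The lexicon and function names are hypothetical stand-ins: real systems for Arabic must model full morphology, not a word list.

```python
# Toy sketch of gender rewriting: produce one alternative of a sentence
# per target (speaker, listener) gender context. The one-entry lexicon
# below is a hypothetical stand-in for a real morphological rewriter.

# Masculine -> feminine forms for listener-referring words
# ("shukran lak" = thank you, said to a man; "shukran laki" = to a woman).
M2F = {"lak": "laki"}
F2M = {v: k for k, v in M2F.items()}

# The four target user gender contexts (speaker, listener).
CONTEXTS = [(s, l) for s in "MF" for l in "MF"]

def rewrite(sentence: str, listener: str) -> str:
    """Swap listener-referring tokens to match the target listener gender."""
    table = M2F if listener == "F" else F2M
    return " ".join(table.get(tok, tok) for tok in sentence.split())

# One alternative per target context.
alternatives = {ctx: rewrite("shukran lak", ctx[1]) for ctx in CONTEXTS}
```

In this sketch only the listener's gender affects the output; a full system also rewrites speaker-referring (first-person) words, which is what makes the multi-user setting difficult.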
Related papers
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- How To Build Competitive Multi-gender Speech Translation Models For Controlling Speaker Gender Translation
When translating from notional gender languages into grammatical gender languages, the generated translation requires explicit gender assignments for various words, including those referring to the speaker.
To avoid biased and non-inclusive behavior, the gender assignment of speaker-related expressions should be guided by externally provided metadata about the speaker's gender.
This paper aims to achieve the same results by integrating the speaker's gender metadata into a single "multi-gender" neural ST model, which is easier to maintain.
arXiv Detail & Related papers (2023-10-23T17:21:32Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Generating Multilingual Gender-Ambiguous Text-to-Speech Voices
This work addresses the task of generating novel gender-ambiguous TTS voices in a multi-speaker, multilingual setting.
To our knowledge, this is the first systematic and validated approach that can reliably generate a variety of gender-ambiguous voices.
arXiv Detail & Related papers (2022-11-01T10:40:24Z)
- Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals
We demonstrate that word embeddings learn the association between a noun and its grammatical gender in grammatically gendered languages.
We show that disentangling grammatical gender signals from word embeddings may lead to improvement in semantic machine learning tasks.
arXiv Detail & Related papers (2022-06-03T17:11:00Z)
- User-Centric Gender Rewriting
We define the task of gender rewriting in contexts involving two users (I and/or You).
We develop a multi-step system that combines the positive aspects of both rule-based and neural rewriting models.
Our results successfully demonstrate the viability of this approach on a recently created corpus for Arabic gender rewriting.
arXiv Detail & Related papers (2022-05-04T17:46:17Z)
- Analyzing Gender Representation in Multilingual Models
We focus on the representation of gender distinctions as a practical case study.
We examine the extent to which the gender concept is encoded in shared subspaces across different languages.
arXiv Detail & Related papers (2022-04-20T00:13:01Z)
- The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses
We introduce a new corpus for gender identification and rewriting in contexts involving one or two target users.
We focus on Arabic, a gender-marking morphologically rich language.
arXiv Detail & Related papers (2021-10-18T12:06:17Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate and no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Multi-Dimensional Gender Bias Classification
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
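Several entries above (e.g., User-Centric Gender Rewriting) combine rule-based and neural rewriting components. A minimal, hypothetical backoff pipeline, not any paper's actual system, might look like:

```python
# Hypothetical rule-then-neural backoff for gender rewriting: apply a
# high-precision rule when one covers the token, otherwise defer to a
# learned model (stubbed out here as identity). The function names and
# rule table are illustrative assumptions, not a published system.

def neural_rewrite(token: str, target: str) -> str:
    """Stand-in for a learned seq2seq rewriter; identity stub here."""
    return token

def rewrite(sentence: str, target: str, rules: dict) -> str:
    out = []
    for tok in sentence.split():
        # Rules fire first; the neural model handles uncovered tokens.
        out.append(rules.get((tok, target), neural_rewrite(tok, target)))
    return " ".join(out)

rules = {("lak", "F"): "laki"}  # hypothetical 2nd-person masc -> fem rule
print(rewrite("shukran lak", "F", rules))  # -> shukran laki
```

The appeal of this design is that rules keep frequent, unambiguous rewrites exact, while the neural component covers the long tail the rules miss.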