Pick a Fight or Bite your Tongue: Investigation of Gender Differences in
Idiomatic Language Usage
- URL: http://arxiv.org/abs/2011.00335v1
- Date: Sat, 31 Oct 2020 18:44:07 GMT
- Authors: Ella Rabinovich, Hila Gonen and Suzanne Stevenson
- Abstract summary: We compile a novel, large and diverse corpus of spontaneous linguistic productions annotated with speakers' gender.
We perform a first large-scale empirical study of distinctions in the usage of figurative language between male and female authors.
- Score: 9.892162266128306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large body of research on gender-linked language has established
foundations regarding cross-gender differences in lexical, emotional, and
topical preferences, along with their sociological underpinnings. We compile a
novel, large and diverse corpus of spontaneous linguistic productions annotated
with speakers' gender, and perform a first large-scale empirical study of
distinctions in the usage of figurative language between male and
female authors. Our analyses suggest that (1) idiomatic choices reflect
gender-specific lexical and semantic preferences in general language, (2) men's
and women's idiomatic usages express higher emotion than their literal
language, with detectable, albeit more subtle, differences between male and
female authors along the dimension of dominance compared to similar
distinctions in their literal utterances, and (3) contextual analysis of
idiomatic expressions reveals considerable differences, reflecting subtle
divergences in usage environments, shaped by cross-gender communication styles
and semantic biases.
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
arXiv Detail & Related papers (2024-07-12T22:10:16Z)
- Leveraging Large Language Models to Measure Gender Bias in Gendered Languages [9.959039325564744]
This paper introduces a novel methodology that leverages the contextual understanding capabilities of large language models (LLMs) to quantitatively analyze gender representation in Spanish corpora.
We empirically validate our method on four widely used benchmark datasets, uncovering significant gender disparities with a male-to-female ratio of 4:1.
arXiv Detail & Related papers (2024-06-19T16:30:58Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Analysis of Male and Female Speakers' Word Choices in Public Speeches [0.0]
We compared the word choices of male and female presenters in public addresses such as TED lectures.
Based on our data, we determined that male speakers use specific types of linguistic, psychological, cognitive, and social words with considerably greater frequency than female speakers.
arXiv Detail & Related papers (2022-11-11T17:30:28Z)
- Analyzing Gender Representation in Multilingual Models [59.21915055702203]
We focus on the representation of gender distinctions as a practical case study.
We examine the extent to which the gender concept is encoded in shared subspaces across different languages.
arXiv Detail & Related papers (2022-04-20T00:13:01Z)
- Under the Morphosyntactic Lens: A Multifaceted Evaluation of Gender Bias in Speech Translation [20.39599469927542]
Gender bias is largely recognized as a problematic phenomenon affecting language technologies.
Most current evaluation practices adopt a word-level focus on a narrow set of occupational nouns under synthetic conditions.
Such protocols overlook key features of grammatical gender languages, which are characterized by morphosyntactic chains of gender agreement.
arXiv Detail & Related papers (2022-03-18T11:14:16Z)
- Exploration of Gender Differences in COVID-19 Discourse on Reddit [4.402655234271756]
We show that gender-linked affective distinctions are amplified in social media postings involving emotionally-charged discourse related to COVID-19.
Our analysis also confirms considerable differences in topical preferences between male and female authors in spontaneous pandemic-related discussions.
arXiv Detail & Related papers (2020-08-13T06:29:24Z)
- Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer [101.58431011820755]
We study gender bias in multilingual embeddings and how it affects transfer learning for NLP applications.
We create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations.
arXiv Detail & Related papers (2020-05-02T04:34:37Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.