Language, communication and society: a gender based linguistics analysis
- URL: http://arxiv.org/abs/2007.06908v1
- Date: Tue, 14 Jul 2020 08:38:37 GMT
- Title: Language, communication and society: a gender based linguistics analysis
- Authors: P. Cutugno, D. Chiarella, R. Lucentini, L. Marconi and G. Morgavi
- Abstract summary: The purpose of this study is to find evidence supporting the hypothesis that language is the mirror of our thinking.
The answers were analysed to see whether gender stereotypes, such as the attribution of psychological and behavioural characteristics, were present.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The purpose of this study is to find evidence supporting the hypothesis
that language is the mirror of our thinking, our prejudices and cultural
stereotypes. In this analysis, a questionnaire was administered to 537 people.
The answers were analysed to see whether gender stereotypes, such as the
attribution of psychological and behavioural characteristics, were present. In
particular, the aim was to identify which stereotyped images, if any, emerge
in defining the roles of men and women in modern society. Moreover, the
results can be a good starting point for understanding whether gender
stereotypes, and the expectations they produce, result in penalization or
inequality. If so, language and its use would inherently create a gender bias
that influences evaluations both in work settings and in everyday life.
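The abstract does not detail the statistical analysis. As a minimal, hedged sketch of one standard approach (not necessarily the authors' method), a chi-square test of independence can check whether the attribution of a trait depends on the target gender in questionnaire counts; all counts and trait labels below are hypothetical.

```python
# Minimal sketch, NOT the authors' pipeline: chi-square test of independence
# between an attributed trait and the gender it was attributed to.
# All counts and trait labels are hypothetical illustrations.
from scipy.stats import chi2_contingency

# Rows: trait attributed by respondents; columns: target gender (men, women).
# Each hypothetical row sums to the study's 537 respondents.
observed = [
    [310, 227],  # "rational" attributed to men vs. women
    [198, 339],  # "emotional" attributed to men vs. women
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
if p < 0.05:
    print("Trait attribution is not independent of target gender, "
          "consistent with a stereotyped association.")
```

On real counts, a significant result would indicate association only; reading it as penalization or inequality requires the further analysis the abstract points to.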
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems [0.0]
The work gathers a dataset of rhymes and poems to identify gender stereotypes and proposes a model with 97% accuracy for identifying gender bias.
Gender stereotypes were rectified using a Large Language Model (LLM) and its effectiveness was evaluated in a comparative survey against human educator rectifications.
arXiv Detail & Related papers (2024-03-18T13:02:02Z)
- A Multilingual Perspective on Probing Gender Bias [0.0]
Gender bias is a form of systematic negative treatment that targets individuals based on their gender.
This thesis investigates the nuances of how gender bias is expressed through language and within language technologies.
arXiv Detail & Related papers (2024-03-15T21:35:21Z)
- Quantifying Stereotypes in Language [6.697298321551588]
We quantify stereotypes in language by annotating a dataset.
We train pre-trained language models (PLMs) on this dataset to predict sentence-level stereotype scores; a hedged sketch of this kind of scoring appears after this list.
We discuss stereotypes about common social issues such as hate speech, sexism, sentiments, and disadvantaged and advantaged groups.
arXiv Detail & Related papers (2024-01-28T01:07:21Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Analysis of Male and Female Speakers' Word Choices in Public Speeches [0.0]
We compared the word choices of male and female presenters in public addresses such as TED lectures.
Based on our data, we determined that male speakers use specific types of linguistic, psychological, cognitive, and social words with considerably greater frequency than female speakers.
arXiv Detail & Related papers (2022-11-11T17:30:28Z)
- Analyzing Gender Representation in Multilingual Models [59.21915055702203]
We focus on the representation of gender distinctions as a practical case study.
We examine the extent to which the gender concept is encoded in shared subspaces across different languages.
arXiv Detail & Related papers (2022-04-20T00:13:01Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate and no human-labeled data; a naive rule-based sketch of the underlying rewriting task appears after this list.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Pick a Fight or Bite your Tongue: Investigation of Gender Differences in Idiomatic Language Usage [9.892162266128306]
We compile a novel, large and diverse corpus of spontaneous linguistic productions annotated with speakers' gender.
We perform a first large-scale empirical study of distinctions in the usage of figurative language between male and female authors.
arXiv Detail & Related papers (2020-10-31T18:44:07Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
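As noted in the "Quantifying Stereotypes in Language" entry above, the sketch below shows one way a pre-trained language model could be fine-tuned to regress a sentence-level stereotype score. It is a hedged illustration, not the paper's released code: the model name, example sentences, scores, and hyperparameters are all assumptions.

```python
# Hedged sketch, NOT the paper's released code: fine-tune a generic PLM to
# predict a sentence-level stereotype score as a regression target.
# Model name, examples, scores, and hyperparameters are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1, problem_type="regression"
)

# Hypothetical annotated examples: (sentence, stereotype score in [0, 1]).
train_pairs = [
    ("Women are too emotional to lead.", 0.95),
    ("The committee elected a new chair.", 0.05),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for sentence, score in train_pairs:  # one pass; real training would batch and iterate
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    labels = torch.tensor([[score]])
    loss = model(**inputs, labels=labels).loss  # MSE loss under problem_type="regression"
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Score a new sentence.
model.eval()
with torch.no_grad():
    out = model(**tokenizer("Men never ask for directions.", return_tensors="pt"))
print(f"predicted stereotype score: {out.logits.item():.2f}")
```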
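For the "They, Them, Theirs" entry, the naive rule-based sketch below illustrates the underlying rewriting task (gendered singular pronouns to singular "they"), the kind of transformation that can produce training pairs without human labels. It is deliberately simplistic and is not the paper's model; the pronoun map and agreement fixes are assumptions that miss many cases (e.g. object-position "her").

```python
# Naive, hedged sketch (not the paper's system): rule-based rewriting of
# gendered third-person singular pronouns to singular "they".
import re

PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their",   # possessive determiner; standalone "his" -> "theirs" not handled
    "her": "their",   # ambiguous: object-position "her" -> "them" not handled
    "hers": "theirs",
    "himself": "themself", "herself": "themself",
}

def neutralize(sentence: str) -> str:
    """Rewrite gendered pronouns to singular 'they', preserving capitalization."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        out = PRONOUN_MAP[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    rewritten = re.sub(pattern, repl, sentence)
    # Crude subject-verb agreement fixes: "they is/was" -> "they are/were".
    rewritten = re.sub(r"\b([Tt]hey) is\b", r"\1 are", rewritten)
    rewritten = re.sub(r"\b([Tt]hey) was\b", r"\1 were", rewritten)
    return rewritten

print(neutralize("She said he was late, so she called him herself."))
# -> "They said they were late, so they called them themself."
```

A learned model, as in the paper, is what handles the agreement and ambiguity cases that rules like these miss.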
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.