Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems
- URL: http://arxiv.org/abs/2403.11752v2
- Date: Mon, 25 Mar 2024 16:33:12 GMT
- Title: Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems
- Authors: Aditya Narayan Sankaran, Vigneshwaran Shankaran, Sampath Lonka, Rajesh Sharma
- Abstract summary: The work contributes by gathering a dataset of rhymes and poems to identify gender stereotypes and by proposing a model that identifies gender bias with 97% accuracy.
Gender stereotypes were rectified using a Large Language Model (LLM), and the effectiveness of the rectifications was evaluated in a comparative survey against human educator rectifications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rhymes and poems are a powerful medium for transmitting cultural norms and societal roles. However, the pervasive existence of gender stereotypes in these works perpetuates biased perceptions and limits the scope of individuals' identities. Past works have shown that stereotyping and prejudice emerge in early childhood, and developmental research on causal mechanisms is critical for understanding and controlling stereotyping and prejudice. This work contributes by gathering a dataset of rhymes and poems to identify gender stereotypes and by proposing a model that identifies gender bias with 97% accuracy. Gender stereotypes were rectified using a Large Language Model (LLM), and the effectiveness of the rectifications was evaluated in a comparative survey against human educator rectifications. To summarize, this work highlights the pervasive nature of gender stereotypes in literary works and reveals the potential of LLMs to rectify them. This study raises awareness and promotes inclusivity within artistic expressions, making a significant contribution to the discourse on gender equality.
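The abstract describes a two-stage pipeline: a classifier flags stereotyped lines, then an LLM rewrites them. The paper's actual 97%-accuracy model, dataset, and prompts are not given here; the sketch below is a hypothetical stand-in (TF-IDF plus logistic regression on toy data), not the authors' implementation.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data: 1 = gender-stereotyped line, 0 = neutral line.
lines = [
    "the little maid swept the floor while her brother played outside",
    "the children sang together beneath the old oak tree",
    "be brave my boy and gentle little girl",
    "the river carried leaves and laughter down the hill",
]
labels = [1, 0, 1, 0]

# Stage 1: flag stereotyped lines with a simple text classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(lines, labels)

# Stage 2: hand flagged lines to an LLM with a rewrite instruction.
PROMPT = ("Rewrite this rhyme to remove gender stereotypes while preserving "
          "its rhyme and meter:\n{line}")
test = "he fought the dragon while she waited at home"
if clf.predict([test])[0] == 1:
    print(PROMPT.format(line=test))
```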
Related papers
- Unveiling Gender Bias in Large Language Models: Using Teacher's Evaluation in Higher Education As an Example [0.0]
This paper investigates gender bias in Large Language Model (LLM)-generated teacher evaluations in a higher education setting.
It applies a comprehensive analytical framework that includes Odds Ratio (OR) analysis, Word Embedding Association Test (WEAT), sentiment analysis, and contextual analysis.
Specifically, words related to approachability and support were used more frequently for female instructors, while words related to entertainment were predominantly used for male instructors.
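Two of the measures named above have standard closed forms. As a self-contained illustration with placeholder vectors and invented counts (not the paper's data), WEAT's effect size and a 2x2 odds ratio look like this:
```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """Standard WEAT effect size over word vectors: s(w) = mean cos(w, A) -
    mean cos(w, B); d = (mean s(X) - mean s(Y)) / std of s over X union Y."""
    s = lambda w: np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sX, sY = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

def odds_ratio(f_with, f_without, m_with, m_without):
    """OR from a 2x2 count table, e.g. how much more often 'supportive'
    appears in evaluations of female vs. male instructors."""
    return (f_with / f_without) / (m_with / m_without)

rng = np.random.default_rng(0)
vecs = lambda n: [rng.normal(size=50) for _ in range(n)]  # placeholder vectors
print(weat_effect_size(vecs(4), vecs(4), vecs(4), vecs(4)))  # near 0: no bias signal
print(odds_ratio(30, 70, 12, 88))  # about 3.1: word skews toward female instructors
```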
arXiv Detail & Related papers (2024-09-15T07:50:33Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
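The summary does not define how the Emotional Attitude Score is computed, so the snippet below is only a hypothetical reading of the idea: average lexicon polarities of attitude words in each gendered rendering and compare. The lexicon values are invented placeholders, not the paper's EAS.
```python
# Hypothetical sketch only; the paper's actual EAS formula is not given here.
ATTITUDE = {"stubborn": -0.6, "determined": 0.6, "bossy": -0.7, "assertive": 0.5}

def attitude_score(sentence):
    hits = [ATTITUDE[w] for w in sentence.lower().split() if w in ATTITUDE]
    return sum(hits) / len(hits) if hits else 0.0

# The same ambiguous source sentence rendered with different genders:
print(attitude_score("she is stubborn and bossy"))       # -0.65
print(attitude_score("he is determined and assertive"))  #  0.55 -> attitude gap
```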
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes [12.704072523930444]
This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess the counter-stereotypes' offensiveness, plausibility, and potential effectiveness.
arXiv Detail & Related papers (2024-04-18T01:48:28Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender bias: the well-known bias in gendered occupations and a novel aspect, bias in organizational power.
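A minimal sketch of the paired-prompt idea: one prompt depicts two people in a male-stereotyped and a female-stereotyped role, and the depicted genders are tallied. `generate_image` and `annotate_genders` are stand-ins (random stubs here, so the scaffold runs); the role pairs are illustrative, not the paper's.
```python
import random

random.seed(0)
ROLE_PAIRS = [("CEO", "assistant"), ("surgeon", "nurse")]  # illustrative pairs

def generate_image(prompt):
    return prompt  # stub: a real system would call a text-to-image model here

def annotate_genders(image):
    # stub: a real study would annotate the two depicted people (human or model)
    return random.choice(["male", "female"]), random.choice(["male", "female"])

def stereotype_rate(samples_per_pair=100):
    hits = total = 0
    for high, low in ROLE_PAIRS:
        prompt = f"A {high} and their {low} talking in an office"
        for _ in range(samples_per_pair):
            g_high, g_low = annotate_genders(generate_image(prompt))
            hits += (g_high, g_low) == ("male", "female")  # stereotype-conforming
            total += 1
    return hits / total  # about 0.25 for the random stub; higher signals joint bias

print(stereotype_rate())
```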
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
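A minimal sketch of the perturbation idea, assuming a simple gendered word-swap list; the paper's exact perturbation scheme and the evaluated models are not specified here. A robust model should score the original and the gender-swapped text similarly.
```python
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "prince": "princess", "princess": "prince", "king": "queen", "queen": "king"}

def perturb(text):
    """Swap gendered words to produce a counterfactual version of the text."""
    return " ".join(SWAPS.get(tok, tok) for tok in text.lower().split())

def robustness_gap(score, text):
    """Absolute change in some model score (e.g., a classifier probability or
    negative perplexity) under gender perturbation; large gaps indicate bias."""
    return abs(score(text) - score(perturb(text)))

toy_score = lambda t: t.count("brave prince")  # stand-in for a real model score
story = "the brave prince saved her from the dragon"
print(perturb(story))                    # 'the brave princess saved him from the dragon'
print(robustness_gap(toy_score, story))  # 1: the toy score is not gender-robust
```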
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Improvements to these models could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z)
- A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that male characters are twice as numerous as female characters, a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
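A toy sketch of the two counts reported above: gendered character mentions, and their co-occurrence with moral-foundation seed words in a small context window. The lexicons are tiny invented placeholders, not the paper's resources.
```python
from collections import Counter

MALE = {"he", "him", "prince", "king", "boy"}
FEMALE = {"she", "her", "princess", "queen", "girl"}
MORAL = {"care": {"nurse", "comfort"}, "authority": {"command", "obey"}}

def gender_moral_counts(tokens):
    counts = Counter()
    for i, tok in enumerate(tokens):
        gender = "male" if tok in MALE else "female" if tok in FEMALE else None
        if gender is None:
            continue
        counts[gender] += 1                             # raw representation count
        window = tokens[max(0, i - 5): i + 6]           # 5-token context window
        for foundation, seeds in MORAL.items():
            counts[(gender, foundation)] += sum(w in seeds for w in window)
    return counts

tale = "the queen would comfort her girl while the king would command him".split()
print(gender_moral_counts(tale))
```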
arXiv Detail & Related papers (2022-11-25T19:38:09Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produces stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model [4.916009028580767]
We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM).
The SCM proposes that stereotypes can be understood along two primary dimensions: warmth and competence.
It is known that countering stereotypes with anti-stereotypical examples is one of the most effective ways to reduce biased thinking.
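The two SCM axes invite a simple computational reading: score a trait word by embedding similarity to warmth and competence seed words. The 2-D vectors below are toy placeholders (a real run would use pretrained embeddings) and the seed lists are illustrative.
```python
import numpy as np

EMB = {  # toy 2-D embeddings (placeholders, not real data)
    "kind": np.array([0.9, 0.1]), "friendly": np.array([0.8, 0.2]),
    "smart": np.array([0.1, 0.9]), "skilled": np.array([0.2, 0.8]),
    "motherly": np.array([0.85, 0.15]),
}
WARMTH, COMPETENCE = ["kind", "friendly"], ["smart", "skilled"]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def scm_scores(word):
    """Place a trait word on the warmth and competence axes."""
    v = EMB[word]
    warm = np.mean([cos(v, EMB[s]) for s in WARMTH])
    comp = np.mean([cos(v, EMB[s]) for s in COMPETENCE])
    return warm, comp

print(scm_scores("motherly"))  # high warmth, lower competence on the toy vectors
```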
arXiv Detail & Related papers (2021-06-04T16:53:37Z)
- Language, communication and society: a gender based linguistics analysis [0.0]
The purpose of this study is to find evidence supporting the hypothesis that language mirrors our thinking.
The answers were analysed to see whether gender stereotypes were present, such as the gendered attribution of psychological and behavioural characteristics.
arXiv Detail & Related papers (2020-07-14T08:38:37Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when trained on gender-biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
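The summary does not enumerate the framework's dimensions, so the schema below is only an illustration of the general shape such a decomposition could take: a gender label per pragmatic role instead of one text-level flag.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenderLabels:
    """Hypothetical multi-dimensional gender annotation for one utterance."""
    about: Optional[str] = None     # gender of the person the text is about
    to: Optional[str] = None        # gender of the addressee
    speaking: Optional[str] = None  # gender of the speaker

print(GenderLabels(about="female", speaking="male"))
```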
arXiv Detail & Related papers (2020-05-01T21:23:20Z)