A Moral- and Event- Centric Inspection of Gender Bias in Fairy Tales at
A Large Scale
- URL: http://arxiv.org/abs/2211.14358v1
- Date: Fri, 25 Nov 2022 19:38:09 GMT
- Title: A Moral- and Event- Centric Inspection of Gender Bias in Fairy Tales at
A Large Scale
- Authors: Zhixuan Zhou, Jiao Sun, Jiaxin Pei, Nanyun Peng and Jinjun Xiong
- Abstract summary: We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is twice that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
- Score: 50.92540580640479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairy tales are a common resource for young children to learn a language or
understand how a society works. However, gender bias, e.g., stereotypical
gender roles, in this literature may cause harm and skew children's world view.
Instead of decades of qualitative and manual analysis of gender bias in fairy
tales, we computationally analyze gender bias in a fairy tale dataset
containing 624 fairy tales from 7 different cultures. We specifically examine
gender difference in terms of moral foundations, which are measures of human
morality, and events, which reveal human activities associated with each
character. We find that the number of male characters is two times that of
female characters, showing a disproportionate gender representation. Our
analysis further reveals stereotypical portrayals of both male and female
characters in terms of moral foundations and events. Female characters turn out
to be more associated with care-, loyalty-, and sanctity-related moral words,
while male characters are more associated with fairness- and authority-related
moral words. Female characters' events are often about emotion (e.g., weep),
appearance (e.g., comb), household (e.g., bake), etc.; while male characters'
events are more about profession (e.g., hunt), violence (e.g., destroy),
justice (e.g., judge), etc. Gender bias in terms of moral foundations shows an
obvious difference across cultures. For example, female characters are more
associated with care and sanctity in high uncertainty-avoidance cultures which
are less open to changes and unpredictability. Based on the results, we propose
implications for children's literature and early literacy research.
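The lexicon-based association the abstract describes can be sketched as simple dictionary matching: count, for each character's surrounding words, how many fall under each moral foundation. A minimal illustration follows; the miniature lexicon and toy contexts here are invented for demonstration (the paper relies on an established moral foundations dictionary, not this hand-picked word list).

```python
from collections import Counter

# Hypothetical miniature moral foundations lexicon (illustrative only;
# a real analysis would use a full Moral Foundations Dictionary).
MORAL_LEXICON = {
    "care": {"protect", "nurse", "comfort"},
    "fairness": {"judge", "justice", "equal"},
    "loyalty": {"devote", "faithful", "ally"},
    "authority": {"command", "obey", "rule"},
    "sanctity": {"pure", "holy", "sacred"},
}

def foundation_counts(tokens):
    """Count how many tokens fall under each moral foundation."""
    counts = Counter()
    for token in tokens:
        for foundation, words in MORAL_LEXICON.items():
            if token.lower() in words:
                counts[foundation] += 1
    return counts

# Toy character contexts (invented for illustration).
female_context = "she would comfort and protect the child faithful to her vow".split()
male_context = "the king would command judge and rule with justice".split()

print(foundation_counts(female_context))  # care and loyalty words dominate
print(foundation_counts(male_context))    # fairness and authority words dominate
```

Aggregating such counts over all mentions of male versus female characters, and normalizing by mention frequency, yields the kind of per-foundation gender comparison reported above.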
Related papers
- Computational Analysis of Gender Depiction in the Comedias of Calderón de la Barca [6.978406757882009]
We develop methods to study gender depiction in the non-religious works (comedias) of Pedro Calderón de la Barca.
We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods.
We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies.
arXiv Detail & Related papers (2024-11-06T13:13:33Z)
- Moral Judgments in Online Discourse are not Biased by Gender [3.2771631221674333]
We use data from r/AITA, a Reddit community with 17 million members who share first-hand experiences seeking community judgment on their behavior.
We find no direct causal effect of the protagonist's gender on the received moral judgments.
Our findings complement existing correlational studies and suggest that gender roles may exert greater influence in specific social contexts.
arXiv Detail & Related papers (2024-08-23T07:10:48Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales [46.65377334112404]
Social biases and stereotypes are embedded in our culture in part through their presence in our stories.
We propose a computational pipeline that automatically extracts a story's temporal narrative verb-based event chain for each of its characters.
We also present a verb-based event annotation scheme that can facilitate bias analysis by including categories such as those that align with traditional stereotypes.
arXiv Detail & Related papers (2023-05-26T05:29:37Z)
- Uncovering Implicit Gender Bias in Narratives through Commonsense Inference [21.18458377708873]
We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
arXiv Detail & Related papers (2021-09-14T04:57:45Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.