Computational Analysis of Gender Depiction in the Comedias of Calderón de la Barca
- URL: http://arxiv.org/abs/2411.03895v1
- Date: Wed, 06 Nov 2024 13:13:33 GMT
- Title: Computational Analysis of Gender Depiction in the Comedias of Calderón de la Barca
- Authors: Allison Keith, Antonio Rojas Castro, Sebastian Padó
- Abstract summary: We develop methods to study gender depiction in the non-religious works (comedias) of Pedro Calderón de la Barca.
We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods.
We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies.
- Abstract: In theatre, playwrights use the portrayal of characters to explore culturally based gender norms. In this paper, we develop quantitative methods to study gender depiction in the non-religious works (comedias) of Pedro Calderón de la Barca, a prolific 17th-century Spanish author. We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods to determine which text features are most influential in the model's decision to classify speech as 'male' or 'female', indicating the most gendered elements of dialogue in Calderón's comedias in a human-accessible manner. We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies (up to f=0.83). Analysis reveals semantic aspects of gender portrayal, and demonstrates that the model is even useful in providing a relatively accurate scene-by-scene prediction of cross-dressing characters.
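The pipeline the abstract describes — a gender classifier over character speech plus attribution of the decision to individual text features — can be sketched with a simple word-level log-odds model. This is a minimal illustration, not the authors' actual classifier; the training lines and smoothing are hypothetical.

```python
# Sketch: word-based gender classifier over character speech, with
# per-word attributions. Data and scoring are illustrative only.
import math
from collections import Counter

# Hypothetical training pairs (speech, speaker_gender)
train = [
    ("mi honor y mi espada", "male"),
    ("honor venganza espada duelo", "male"),
    ("mi amor y mis lagrimas", "female"),
    ("amor llanto lagrimas pena", "female"),
]

counts = {"male": Counter(), "female": Counter()}
for text, gender in train:
    counts[gender].update(text.split())

vocab = set(counts["male"]) | set(counts["female"])

def log_odds(word):
    # Laplace-smoothed log-odds of 'male' vs 'female' for one word
    m = (counts["male"][word] + 1) / (sum(counts["male"].values()) + len(vocab))
    f = (counts["female"][word] + 1) / (sum(counts["female"].values()) + len(vocab))
    return math.log(m / f)

def classify(text):
    # Attribution: each known word's log-odds contribution to the decision
    attr = {w: log_odds(w) for w in text.split() if w in vocab}
    label = "male" if sum(attr.values()) > 0 else "female"
    return label, sorted(attr.items(), key=lambda kv: abs(kv[1]), reverse=True)

label, attributions = classify("honor y espada")
print(label, attributions[0][0])
```

The attribution list is the "human-accessible" part: it surfaces which words pushed the model toward one label, analogous in spirit to the explainability methods the paper applies at scale.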
Related papers
- Reflecting the Male Gaze: Quantifying Female Objectification in 19th and 20th Century Novels [3.0623865942628594]
We propose a framework for analyzing gender bias in terms of female objectification.
Our framework measures female objectification along two axes.
Applying our framework to 19th and 20th century novels reveals evidence of female objectification.
arXiv Detail & Related papers (2024-03-25T20:16:14Z)
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- Analysing Gender Bias in Text-to-Image Models using Object Detection [0.0]
Using paired prompts that specify gender and vaguely reference an object we can examine whether certain objects are associated with a certain gender.
Male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently.
Female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups.
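The paired-prompt comparison above amounts to tallying how often each detected object co-occurs with each gendered prompt. A toy version (the detections are hypothetical stand-ins for object-detector output, not results from the paper):

```python
# Sketch of the paired-prompt tally: count detected objects per
# prompt gender, then compute a per-object frequency skew.
from collections import Counter

# (prompt_gender, detected_objects) pairs -- illustrative only
detections = [
    ("male", ["tie", "truck"]),
    ("male", ["tie", "baseball bat"]),
    ("female", ["handbag", "umbrella"]),
    ("female", ["handbag", "cup"]),
]

counts = {"male": Counter(), "female": Counter()}
for gender, objects in detections:
    counts[gender].update(objects)

# Positive skew: object appears more often under male prompts
all_objects = set(counts["male"]) | set(counts["female"])
skew = {o: counts["male"][o] - counts["female"][o] for o in all_objects}
print(sorted(skew.items(), key=lambda kv: kv[1], reverse=True))
```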
arXiv Detail & Related papers (2023-07-16T12:31:29Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is two times that of female characters, showing a disproportionate gender representation.
Female characters are more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
arXiv Detail & Related papers (2022-11-25T19:38:09Z)
- Identifying gender bias in blockbuster movies through the lens of machine learning [0.5023676240063351]
We gathered scripts of films from different genres and derived sentiments and emotions using natural language processing techniques.
We found specific patterns in male and female characters' personality traits in movies that align with societal stereotypes.
Using statistical and machine learning techniques, we found biases wherein men are depicted as more dominant and envious than women.
arXiv Detail & Related papers (2022-11-21T09:41:53Z)
- Wikigender: A Machine Learning Model to Detect Gender Bias in Wikipedia [0.0]
We use a machine learning model to prove that there is a difference in how women and men are portrayed on Wikipedia.
Using only adjectives as input to the model, we show that the adjectives used to portray women have a higher subjectivity than the ones used to describe men.
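The adjective-subjectivity comparison can be sketched by scoring each adjective against a subjectivity lexicon and comparing group means. The lexicon and word lists here are hypothetical illustrations, not the paper's data or lexicon.

```python
# Sketch: compare mean subjectivity of two adjective lists using a
# tiny hand-made lexicon (0 = objective, 1 = subjective). All values
# are illustrative, not from the paper.
SUBJECTIVITY = {
    "beautiful": 0.9, "emotional": 0.8, "charming": 0.7,
    "british": 0.1, "tall": 0.2, "political": 0.3,
}

def mean_subjectivity(adjectives):
    scores = [SUBJECTIVITY[a] for a in adjectives if a in SUBJECTIVITY]
    return sum(scores) / len(scores)

group_a = ["beautiful", "emotional", "charming"]  # hypothetical list
group_b = ["british", "tall", "political"]        # hypothetical list
print(mean_subjectivity(group_a), mean_subjectivity(group_b))
```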
arXiv Detail & Related papers (2022-11-14T16:49:09Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.