Identifying gender bias in blockbuster movies through the lens of machine learning
 - URL: http://arxiv.org/abs/2211.12504v1
 - Date: Mon, 21 Nov 2022 09:41:53 GMT
 - Title: Identifying gender bias in blockbuster movies through the lens of machine learning
 - Authors: Muhammad Junaid Haris, Aanchal Upreti, Melih Kurtaran, Filip Ginter, Sebastien Lafond, Sepinoud Azimi
 - Abstract summary: We gathered scripts of films from different genres and derived sentiments and emotions using natural language processing techniques.
We found specific patterns in male and female characters' personality traits in movies that align with societal stereotypes.
We used mathematical and machine learning techniques and found some biases wherein men are shown to be more dominant and envious than women.
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: The problem of gender bias is highly prevalent and well known. In this paper, we analyse the portrayal of gender roles in English-language movies, a medium that strongly influences society and shapes people's beliefs and opinions. First, we gathered film scripts from different genres and derived sentiments and emotions using natural language processing techniques. We then converted the scripts into embeddings, i.e. vector representations of the text. Through a thorough investigation, we found patterns in the personality traits of male and female movie characters that align with societal stereotypes. Furthermore, using mathematical and machine learning techniques, we found biases wherein men are portrayed as more dominant and envious than women, whereas women are given more joyful roles. To the best of our knowledge, we introduce a novel technique for converting dialogues into arrays of emotions by mapping them onto Plutchik's wheel of emotions. Our study aims to encourage reflection on gender equality in film and to help other researchers analyse movies automatically rather than manually.
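The abstract does not give implementation details, but the dialogue-to-emotion conversion it describes can be sketched as a lexicon lookup over Plutchik's eight basic emotions. The tiny lexicon and function below are illustrative assumptions, not the authors' actual method; a real analysis would use a full emotion lexicon (such as NRC EmoLex) and the paper's own embedding pipeline.

```python
from collections import Counter

# Plutchik's eight basic emotions, as referenced in the abstract.
PLUTCHIK_EMOTIONS = [
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
]

# Tiny hypothetical lexicon for illustration only; a real pipeline
# would use a full word-to-emotion lexicon such as NRC EmoLex.
LEXICON = {
    "happy": "joy",
    "wonderful": "joy",
    "afraid": "fear",
    "terrible": "sadness",
    "furious": "anger",
}

def dialogue_to_emotion_vector(dialogue: str) -> list[float]:
    """Map one line of dialogue to a normalised vector over
    Plutchik's eight emotions via simple lexicon lookup."""
    words = [w.strip(".,!?") for w in dialogue.lower().split()]
    counts = Counter(LEXICON[w] for w in words if w in LEXICON)
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts.get(e, 0) / total for e in PLUTCHIK_EMOTIONS]

vec = dialogue_to_emotion_vector("I am so happy, this is wonderful")
```

Aggregating such per-line vectors by character would yield the kind of emotion profile the paper compares across genders.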
        Related papers
 - Gender Bias in Text-to-Video Generation Models: A case study of Sora
This study investigates the presence of gender bias in OpenAI's Sora, a text-to-video generation model.
We uncover significant evidence of bias by analyzing the videos generated from a diverse set of gender-neutral and stereotypical prompts.
(arXiv, 2024-12-30)
 - Computational Analysis of Gender Depiction in the Comedias of Calderón de la Barca
We develop methods to study gender depiction in the non-religious works (comedias) of Pedro Calderón de la Barca.
We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods.
We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies.
(arXiv, 2024-11-06)
 - What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability.
(arXiv, 2024-07-12)
 - Multi-channel Emotion Analysis for Consensus Reaching in Group Movie Recommendation Systems
This paper proposes a novel approach to group movie suggestions by examining emotions from three different channels.
We employ the Jaccard similarity index to match each participant's emotional preferences to prospective movie choices.
The group's consensus level is calculated using a fuzzy inference system.
(arXiv, 2024-04-21)
 - Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes.
We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes.
Our study sheds light on the complex societal interplay between language, gender, and emotion.
(arXiv, 2024-03-05)
 - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
(arXiv, 2023-06-21)
 - A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at a Large Scale
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is two times that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
(arXiv, 2022-11-25)
 - Uncovering Implicit Gender Bias in Narratives through Commonsense Inference
We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
(arXiv, 2021-09-14)
 - PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
We formulate a new revision task that aims to rewrite a given text to correct the implicit and potentially undesirable bias in character portrayals.
We introduce PowerTransformer as an approach that debiases text through the lens of connotation frames.
We demonstrate that our approach outperforms ablations and existing methods from related tasks.
(arXiv, 2020-10-26)
 - Computational appraisal of gender representativeness in popular movies
This article illustrates how automated computational methods may be used to scale up such empirical observations.
We specifically apply a face and gender detection algorithm on a broad set of popular movies spanning more than three decades to carry out a large-scale appraisal of the on-screen presence of women and men.
(arXiv, 2020-09-16)
 - Multi-Dimensional Gender Bias Classification
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
(arXiv, 2020-05-01)
 - Measuring Female Representation and Impact in Films over Time
Women have long been underrepresented in movies, and only recently has their representation begun to improve.
We propose a new measure, the female cast ratio, and compare it to the commonly used Bechdel test result.
(arXiv, 2020-01-10)
        This list is automatically generated from the titles and abstracts of the papers on this site.