Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
- URL: http://arxiv.org/abs/2403.03121v3
- Date: Tue, 28 May 2024 14:43:52 GMT
- Title: Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
- Authors: Flor Miriam Plaza-del-Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy
- Abstract summary: We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes.
We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes.
Our study sheds light on the complex societal interplay between language, gender, and emotion.
- Score: 20.21748776472278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like 'When I had a serious argument with a dear person'. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications.
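A minimal sketch of the persona-prompting setup described in the abstract (the prompt wording, event phrasing, sample count, and model choice below are illustrative assumptions, not the authors' exact protocol):
```python
# Sketch of gendered persona prompting for emotion attribution.
# Prompt template, events, and model are assumptions, not the paper's protocol.
from collections import Counter
from openai import OpenAI  # any chat-completion client works; OpenAI shown as one option

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {"woman": "You are a woman.", "man": "You are a man."}
EVENTS = ["When I had a serious argument with a dear person"]  # example event from the abstract

def query_llm(system_prompt: str, user_prompt: str, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return resp.choices[0].message.content

def attribute_emotions(n_samples: int = 50) -> dict:
    # Tally the one-word emotions each gendered persona produces per event.
    counts = {gender: Counter() for gender in PERSONAS}
    for gender, persona in PERSONAS.items():
        for event in EVENTS:
            for _ in range(n_samples):
                answer = query_llm(
                    persona,
                    f"{event}. What is the main emotion you feel? Answer with one word.",
                )
                counts[gender][answer.strip().lower().rstrip(".")] += 1
    return counts  # compare the emotion distributions across gender-event pairs
```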
Related papers
- Gender Bias in Decision-Making with Large Language Models: A Study of Relationship Conflicts [15.676219253088211]
We study gender equity within large language models (LLMs) through a decision-making lens.
We explore nine relationship configurations through name pairs across three name lists (men, women, neutral)
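A small sketch of how such name-pair configurations can be enumerated (the names, the scenario, and the 3x3 pairing scheme are assumptions based on the summary, not the paper's materials):
```python
# Enumerate nine ordered pairings of three name-list types; all names and
# the prompt wording are invented for illustration.
from itertools import product

NAME_LISTS = {
    "men": ["James", "Robert"],
    "women": ["Mary", "Patricia"],
    "neutral": ["Taylor", "Jordan"],
}

# Nine configurations: every ordered combination of the three list types.
configurations = list(product(NAME_LISTS, repeat=2))  # e.g. ("men", "women")

for list_a, list_b in configurations:
    name_a, name_b = NAME_LISTS[list_a][0], NAME_LISTS[list_b][0]
    prompt = f"{name_a} and {name_b} disagree about household chores. Who should apologize first?"
    # send `prompt` to an LLM and tally which name the model sides with
```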
arXiv Detail & Related papers (2024-10-14T20:50:11Z)
- An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models [2.98683507969764]
Women are associated with emotions like empathy, fear, and guilt, while men are linked to anger, bravado, and authority.
We show the existence of gender bias in the context of emotions in Bangla through analytical methods.
All of our resources including code and data are made publicly available to support future research on Bangla NLP.
arXiv Detail & Related papers (2024-07-08T22:22:15Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
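A hedged sketch of scoring model-identified triggers against human annotations with token-level F1 (the post, trigger spans, and metric choice are invented for illustration and do not reproduce the EmoTrigger setup exactly):
```python
# Compare a model-extracted emotion trigger to a human-annotated one
# using bag-of-tokens F1; all example strings are invented.
def token_f1(pred: str, gold: str) -> float:
    pred_toks, gold_toks = set(pred.lower().split()), set(gold.lower().split())
    if not pred_toks or not gold_toks:
        return 0.0
    overlap = len(pred_toks & gold_toks)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

post = "I finally got the job offer after months of interviews!"
gold_trigger = "got the job offer"           # human-annotated trigger
model_trigger = "job offer after months"     # what an LLM might return
print(f"trigger F1: {token_f1(model_trigger, gold_trigger):.2f}")
```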
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Socratis: Are large multimodal models emotionally aware? [63.912414283486555]
Existing emotion prediction benchmarks do not consider the diversity of emotions that an image and text can elicit in humans, for various reasons.
We propose Socratis, a societal reactions benchmark, where each image-caption (IC) pair is annotated with multiple emotions and the reasons for feeling them.
We benchmark the capability of state-of-the-art multimodal large language models to generate the reasons for feeling an emotion given an IC pair.
arXiv Detail & Related papers (2023-08-31T13:59:35Z)
- Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench [83.41621219298489]
We evaluate Large Language Models' (LLMs) anthropomorphic capabilities using the emotion appraisal theory from psychology.
We collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study.
We conduct a human evaluation involving more than 1,200 subjects worldwide.
arXiv Detail & Related papers (2023-08-07T15:18:30Z)
- Identifying gender bias in blockbuster movies through the lens of machine learning [0.5023676240063351]
We gathered scripts of films from different genres and derived sentiments and emotions using natural language processing techniques.
We found specific patterns in male and female characters' personality traits in movies that align with societal stereotypes.
Using mathematical and machine learning techniques, we found biases wherein men are portrayed as more dominant and envious than women.
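A minimal sketch of deriving per-character sentiment from dialogue with an off-the-shelf pipeline (the dialogue lines are invented, and the paper's actual analysis covered richer emotion and personality-trait features):
```python
# Score invented character lines with a stock sentiment pipeline and
# average a signed score per character; a stand-in for the paper's richer analysis.
from collections import defaultdict
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default DistilBERT SST-2 model

dialogue = [
    ("male_lead", "Stay out of my way. I make the decisions here."),
    ("female_lead", "I just wish things could go back to how they were."),
]

scores = defaultdict(list)
for character, line in dialogue:
    result = sentiment(line)[0]  # {'label': 'POSITIVE'|'NEGATIVE', 'score': ...}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[character].append(signed)

for character, vals in scores.items():
    print(character, sum(vals) / len(vals))
```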
arXiv Detail & Related papers (2022-11-21T09:41:53Z)
- Gendered Mental Health Stigma in Masked Language Models [38.766854150355634]
We investigate gendered mental health stigma in masked language models.
We find that models are consistently more likely to predict female subjects than male subjects in sentences about having a mental health condition.
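A minimal fill-mask probe in the spirit of this finding (the template sentences and model choice are illustrative assumptions, not the study's materials):
```python
# Probe a masked language model for gendered subject predictions in
# mental-health templates; templates and model are illustrative choices.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for template in ["[MASK] has depression.", "[MASK] has anxiety."]:
    top = fill(template, top_k=5)
    pronouns = {p["token_str"]: round(p["score"], 3)
                for p in top if p["token_str"] in {"she", "he"}}
    print(template, pronouns)  # compare P(she) vs P(he) per condition
```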
arXiv Detail & Related papers (2022-10-27T03:09:46Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
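A toy sketch of this run-time control idea, blending per-emotion attribute vectors with manually chosen weights (the embeddings below are random stand-ins; a real system would learn them and condition the TTS decoder on the mixture):
```python
# Blend emotion attribute vectors into a single conditioning vector.
# Random embeddings stand in for learned ones; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
emotion_embeddings = {e: rng.normal(size=64) for e in ["happy", "sad", "angry", "surprise"]}

def mix_emotions(weights: dict) -> np.ndarray:
    """Normalized weighted sum of emotion embeddings, e.g. {'happy': 0.7, 'surprise': 0.3}."""
    total = sum(weights.values())
    return sum(w / total * emotion_embeddings[e] for e, w in weights.items())

attribute_vector = mix_emotions({"happy": 0.7, "surprise": 0.3})
# feed `attribute_vector` into the TTS model's conditioning input
```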
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- MIME: MIMicking Emotions for Empathetic Response Generation [82.57304533143756]
Current approaches to empathetic response generation view the set of emotions expressed in the input text as a flat structure.
We argue that empathetic responses often mimic the emotion of the user to a varying degree, depending on its positivity or negativity and content.
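A toy sketch of polarity-grouped mimicry, mirroring positive user emotions and mapping negative ones to empathetic counterparts (the groupings and mapping are illustrative, not MIME's learned model):
```python
# Choose a response emotion from the user's emotion by polarity:
# mimic positive emotions directly, soften negative ones.
POSITIVE = {"joyful", "excited", "grateful", "proud"}
NEGATIVE_TO_RESPONSE = {"sad": "caring", "angry": "calm", "afraid": "reassuring"}

def response_emotion(user_emotion: str) -> str:
    if user_emotion in POSITIVE:
        return user_emotion  # mimic positive emotions
    return NEGATIVE_TO_RESPONSE.get(user_emotion, "neutral")

print(response_emotion("joyful"))  # -> joyful
print(response_emotion("sad"))     # -> caring
```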
arXiv Detail & Related papers (2020-10-04T00:35:47Z)
- Language, communication and society: a gender based linguistics analysis [0.0]
The purpose of this study is to find evidence supporting the hypothesis that language mirrors our thinking.
The answers were analysed to check whether gender stereotypes were present, such as the gendered attribution of psychological and behavioural characteristics.
arXiv Detail & Related papers (2020-07-14T08:38:37Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
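One plausible reading of such a fine-grained annotation scheme as a data structure (the three dimensions, who is spoken about, spoken to, and speaking as, and all field names are assumptions here, not the paper's exact schema):
```python
# Hypothetical record for multi-dimensional gender annotation;
# dimension names and fields are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class GenderAnnotation:
    text: str
    about: str  # gender of the person the text is about
    to: str     # gender of the addressee
    as_: str    # gender of the speaker ("as" is a Python keyword)

example = GenderAnnotation(
    text="She always knows how to cheer me up.",
    about="female", to="unknown", as_="unknown",
)
```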
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.