Uncovering Implicit Gender Bias in Narratives through Commonsense
Inference
- URL: http://arxiv.org/abs/2109.06437v1
- Date: Tue, 14 Sep 2021 04:57:45 GMT
- Title: Uncovering Implicit Gender Bias in Narratives through Commonsense
Inference
- Authors: Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi
- Abstract summary: We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
- Score: 21.18458377708873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained language models learn socially harmful biases from their training
corpora, and may repeat these biases when used for generation. We study gender
biases associated with the protagonist in model-generated stories. Such biases
may be expressed either explicitly ("women can't park") or implicitly (e.g. an
unsolicited male character guides her into a parking space). We focus on
implicit biases, and use a commonsense reasoning engine to uncover them.
Specifically, we infer and analyze the protagonist's motivations, attributes,
mental states, and implications on others. Our findings regarding implicit
biases are in line with prior work that studied explicit biases, for example
showing that female characters' portrayal is centered around appearance, while
male figures' portrayal focuses on intellect.
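The abstract does not name the commonsense reasoning engine, but a COMET-style sequence-to-sequence model trained on ATOMIC relations is a natural way to realize this step. The sketch below is a minimal illustration under that assumption: the checkpoint path is a placeholder, and the mapping from the paper's four dimensions (motivations, attributes, mental states, implications on others) to the relations xIntent, xAttr, xReact, and oReact is my reading, not a detail confirmed by the abstract.

```python
# Minimal sketch: querying a COMET-style seq2seq model for ATOMIC-like inferences
# about a story's protagonist. The checkpoint path is a placeholder; any COMET
# checkpoint that accepts the "<event> <relation> [GEN]" input format should work.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "path/to/comet-atomic-2020-checkpoint"  # placeholder, not a real model ID
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Assumed mapping to the dimensions analyzed in the paper:
# motivations -> xIntent, attributes -> xAttr, mental states -> xReact,
# implications on others -> oReact.
RELATIONS = ["xIntent", "xAttr", "xReact", "oReact"]

def infer(event: str, relation: str, num_beams: int = 5) -> list[str]:
    """Generate commonsense inferences for one (event, relation) pair."""
    prompt = f"{event} {relation} [GEN]"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_beams,
        num_return_sequences=num_beams,
        max_new_tokens=16,
    )
    return [tokenizer.decode(o, skip_special_tokens=True).strip() for o in outputs]

# A protagonist sentence rewritten around the generic PersonX placeholder.
story_event = "PersonX parks the car"
for rel in RELATIONS:
    print(rel, infer(story_event, rel))
```

Aggregating such inferences over many generated stories, separately for female and male protagonists, is what lets implicit differences (for example, appearance-centered versus intellect-centered attributes) surface as measurable lexical patterns.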
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Are Models Biased on Text without Gender-related Language? [14.931375031931386]
We introduce UnStereoEval (USE), a novel framework for investigating gender bias in stereotype-free scenarios.
USE defines a sentence-level score based on pretraining data statistics to determine whether a sentence contains minimal word-gender associations.
We find low fairness across all 28 tested models, suggesting that bias does not solely stem from the presence of gender-related words.
arXiv Detail & Related papers (2024-05-01T15:51:15Z)
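USE derives its score from pretraining-data statistics. As a rough, self-contained illustration of the underlying idea (the co-occurrence PMI, the toy corpus, and the gendered word list below are assumptions, not the paper's formula), a sentence can be scored by the strongest association between any of its words and a set of gendered terms:

```python
# Illustrative sketch of a sentence-level word-gender association score in the
# spirit of UnStereoEval. The PMI statistic, word list, and toy corpus are
# stand-ins; the paper computes its score from large-scale pretraining statistics.
import math
from collections import Counter

GENDERED = {"he", "she", "him", "her", "his", "hers", "man", "woman"}

def pmi_table(sentences: list[list[str]]) -> dict[str, float]:
    """PMI of each word with the event 'sentence contains a gendered term'."""
    n = len(sentences)
    word_counts, co_counts = Counter(), Counter()
    for toks in sentences:
        toks = set(toks)
        has_gender = bool(toks & GENDERED)
        for w in toks:
            word_counts[w] += 1
            if has_gender:
                co_counts[w] += 1
    p_gender = sum(1 for s in sentences if set(s) & GENDERED) / n
    table = {}
    for w, c in word_counts.items():
        p_w, p_wg = c / n, co_counts[w] / n
        table[w] = math.log(p_wg / (p_w * p_gender)) if p_wg > 0 else float("-inf")
    return table

def sentence_score(tokens: list[str], table: dict[str, float]) -> float:
    """Higher means some word in the sentence is strongly gender-associated."""
    return max((table.get(t, float("-inf")) for t in tokens), default=float("-inf"))

corpus = [["she", "loves", "chess"], ["the", "engineer", "fixed", "the", "pump"]]
table = pmi_table(corpus)
print(sentence_score(["the", "engineer", "slept"], table))  # low: weak gender association
print(sentence_score(["she", "loves", "chess"], table))     # high: strong gender association
```

In this illustration, sentences whose score stays below a chosen threshold would count as (near) stereotype-free test cases.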
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Politeness Stereotypes and Attack Vectors: Gender Stereotypes in Japanese and Korean Language Models [1.5039745292757671]
We study how grammatical gender bias relating to politeness levels manifests in Japanese and Korean language models.
We find that informal polite speech is most indicative of the female grammatical gender, while rude and formal speech is most indicative of the male grammatical gender.
We find politeness levels to be an attack vector for allocational gender bias in cyberbullying detection models.
arXiv Detail & Related papers (2023-06-16T10:36:18Z)
- Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales [46.65377334112404]
Social biases and stereotypes are embedded in our culture in part through their presence in our stories.
We propose a computational pipeline that automatically extracts a story's temporal narrative verb-based event chain for each of its characters.
We also present a verb-based event annotation scheme that can facilitate bias analysis by including categories such as those that align with traditional stereotypes.
arXiv Detail & Related papers (2023-05-26T05:29:37Z)
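As a rough sketch of what a per-character, verb-based event chain might look like, the snippet below uses spaCy dependency parses and exact-name matching to collect, in order, the verbs each character is the subject of; the paper's pipeline, including its annotation scheme and coreference handling, is more involved than this.

```python
# Rough sketch: extract an ordered, verb-based event chain per character using
# spaCy. Assumes the en_core_web_sm model has been downloaded; character mentions
# are matched by exact surface form only (no coreference resolution).
import spacy

nlp = spacy.load("en_core_web_sm")

def event_chains(text: str, characters: list[str]) -> dict[str, list[str]]:
    """Map each character name to the ordered list of verb lemmas it is the subject of."""
    doc = nlp(text)
    chains = {c: [] for c in characters}
    for token in doc:
        if token.pos_ != "VERB":
            continue
        for child in token.children:
            if child.dep_ in ("nsubj", "nsubjpass") and child.text in chains:
                chains[child.text].append(token.lemma_)
    return chains

story = "Cinderella cleaned the hearth. The prince searched the kingdom. Cinderella fled."
print(event_chains(story, ["Cinderella", "prince"]))
# e.g. {'Cinderella': ['clean', 'flee'], 'prince': ['search']}
```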
- Model-Agnostic Gender Debiased Image Captioning [29.640940966944697]
Image captioning models are known to perpetuate and amplify harmful societal bias in the training set.
We propose a framework, called LIBRA, that learns from synthetically biased samples to decrease both types of biases.
arXiv Detail & Related papers (2023-04-07T15:30:49Z)
- A Moral- and Event- Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is twice that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
arXiv Detail & Related papers (2022-11-25T19:38:09Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
- Unsupervised Discovery of Implicit Gender Bias [38.59057512390926]
We take an unsupervised approach to identifying gender bias against women at the comment level.
Our main challenge is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data.
arXiv Detail & Related papers (2020-04-17T17:36:20Z)