Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes
- URL: http://arxiv.org/abs/2404.11845v1
- Date: Thu, 18 Apr 2024 01:48:28 GMT
- Title: Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes
- Authors: Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, Svetlana Kiritchenko
- Abstract summary: This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness.
- Score: 12.704072523930444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender stereotypes are pervasive beliefs about individuals based on their gender that play a significant role in shaping societal attitudes, behaviours, and even opportunities. Recognizing the negative implications of gender stereotypes, particularly in online communications, this study investigates eleven strategies to automatically counteract and challenge these views. We present AI-generated gender-based counter-stereotypes to (self-identified) male and female study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness. The strategies of counter-facts and broadening universals (i.e., stating that anyone can have a trait regardless of group membership) emerged as the most robust approaches, while humour, perspective-taking, counter-examples, and empathy for the speaker were perceived as less effective. Also, the differences in ratings were more pronounced for stereotypes about the different targets than between the genders of the raters. Alarmingly, many AI-generated counter-stereotypes were perceived as offensive and/or implausible. Our analysis and the collected dataset offer foundational insight into counter-stereotype generation, guiding future efforts to develop strategies that effectively challenge gender stereotypes in online interactions.
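The study's comparison of strategies comes down to aggregating participant ratings per strategy. A minimal sketch of that aggregation step, using hypothetical ratings and an assumed 1-5 effectiveness scale (the strategy names follow the abstract; everything else is illustrative):

```python
# Sketch of per-strategy rating aggregation; data and scale are assumptions.
from statistics import mean

# Each record: (strategy, rater_gender, effectiveness rating on an assumed 1-5 scale)
ratings = [
    ("counter-facts", "female", 4), ("counter-facts", "male", 5),
    ("broadening universals", "female", 4), ("broadening universals", "male", 4),
    ("humour", "female", 2), ("humour", "male", 3),
    ("empathy for the speaker", "female", 2), ("empathy for the speaker", "male", 2),
]

def mean_by_strategy(records):
    """Average effectiveness rating per strategy, across all raters."""
    totals = {}
    for strategy, _gender, score in records:
        totals.setdefault(strategy, []).append(score)
    return {s: mean(v) for s, v in totals.items()}

scores = mean_by_strategy(ratings)
# Rank strategies from most to least effective under these toy ratings.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # counter-facts
```

The same grouping could be keyed on rater gender or stereotype target instead, which is how the abstract's comparison of targets versus rater genders would be computed.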
Related papers
- Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems [0.0]
The work contributes a dataset of rhymes and poems for identifying gender stereotypes and proposes a model with 97% accuracy for identifying gender bias.
Gender stereotypes were rectified using a Large Language Model (LLM) and its effectiveness was evaluated in a comparative survey against human educator rectifications.
arXiv Detail & Related papers (2024-03-18T13:02:02Z)
- The Male CEO and the Female Assistant: Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework to systematically evaluate T2I models in a dual-subject generation setting.
PST is a dual-subject generation task, i.e. generating two people in the same image.
We show that despite generating seemingly fair or even anti-stereotype single-person images, DALLE-3 still shows notable biases under PST.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Addressing these misrepresentations could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups [0.8594140167290097]
We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
arXiv Detail & Related papers (2022-08-17T16:23:35Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model [4.916009028580767]
We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM).
The SCM proposes that stereotypes can be understood along two primary dimensions: warmth and competence.
It is known that countering stereotypes with anti-stereotypical examples is one of the most effective ways to reduce biased thinking.
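The SCM's two-dimensional view lends itself to a simple illustration: place trait words on warmth and competence axes, then counter a stereotype by selecting a trait opposed on the relevant axis. The lexicon and scores below are invented for illustration; the paper's actual method is not given in this summary:

```python
# Sketch of SCM-style trait placement; the lexicon and -1..1 scores are
# illustrative assumptions, not the paper's data or method.

# trait -> (warmth, competence), each on an assumed -1..1 scale
scm_lexicon = {
    "nurturing":   (0.9, 0.2),
    "assertive":   (-0.1, 0.8),
    "incompetent": (0.0, -0.9),
    "brilliant":   (0.1, 0.9),
}

def counter_trait(stereotype_trait, lexicon):
    """Return the lexicon trait most opposed on the competence axis:
    a toy stand-in for countering with anti-stereotypical content."""
    _, comp = lexicon[stereotype_trait]
    return max(lexicon, key=lambda t: -comp * lexicon[t][1])

print(counter_trait("incompetent", scm_lexicon))  # brilliant
```

A fuller version would oppose traits in the full warmth-competence plane rather than on one axis, since the SCM locates stereotypes by both dimensions jointly.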
arXiv Detail & Related papers (2021-06-04T16:53:37Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Can gender inequality be created without inter-group discrimination? [0.0]
We test whether a simple agent-based dynamic process could create gender inequality.
We simulate a population who interact in pairs of randomly selected agents to influence each other about their esteem judgments of self and others.
Without prejudice, stereotypes, segregation, or categorization, our model produces inter-group inequality of self-esteem and status that is stable, consensual, and exhibits characteristics of glass ceiling effects.
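The skeleton of such an agent-based process can be sketched as follows. The update rule, parameters, and noise model here are illustrative assumptions, not the authors' model; the point is only to show the shape of the simulation (random pairing, opinion exchange, group-mean comparison):

```python
# Sketch of a pairwise-influence simulation; dynamics and parameters are
# assumptions for illustration, not the paper's actual model.
import random

def simulate(n_agents=40, steps=2000, influence=0.1, seed=0):
    rng = random.Random(seed)
    group = [i % 2 for i in range(n_agents)]   # two label-only groups, no prejudice
    esteem = [0.5] * n_agents                  # self-esteem, kept in [0, 1]
    for _ in range(steps):
        a, _b = rng.sample(range(n_agents), 2)
        # Partner's opinion of a: a noisy reading of a's current self-esteem.
        opinion = min(1.0, max(0.0, esteem[a] + rng.gauss(0, 0.1)))
        # a moves part-way toward the opinion it receives.
        esteem[a] += influence * (opinion - esteem[a])
    # Compare the two groups' mean self-esteem after the process runs.
    means = [
        sum(e for e, g in zip(esteem, group) if g == k) / (n_agents // 2)
        for k in (0, 1)
    ]
    return means

m0, m1 = simulate()
print(round(m0, 3), round(m1, 3))
```

In this toy version any gap between the group means arises purely from random drift; the paper's contribution is showing conditions under which such a gap becomes stable, consensual, and glass-ceiling-like.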
arXiv Detail & Related papers (2020-05-05T07:33:27Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.