Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
- URL: http://arxiv.org/abs/2106.02596v1
- Date: Fri, 4 Jun 2021 16:53:37 GMT
- Title: Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
- Authors: Kathleen C. Fraser, Isar Nejadgholi, Svetlana Kiritchenko
- Abstract summary: We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM).
The SCM proposes that stereotypes can be understood along two primary dimensions: warmth and competence.
It is known that countering stereotypes with anti-stereotypical examples is one of the most effective ways to reduce biased thinking.
- Score: 4.916009028580767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stereotypical language expresses widely-held beliefs about different social
categories. Many stereotypes are overtly negative, while others may appear
positive on the surface, but still lead to negative consequences. In this work,
we present a computational approach to interpreting stereotypes in text through
the Stereotype Content Model (SCM), a comprehensive causal theory from social
psychology. The SCM proposes that stereotypes can be understood along two
primary dimensions: warmth and competence. We present a method for defining
warmth and competence axes in semantic embedding space, and show that the four
quadrants defined by this subspace accurately represent the warmth and
competence concepts, according to annotated lexicons. We then apply our
computational SCM model to textual stereotype data and show that it compares
favourably with survey-based studies in the psychological literature.
Furthermore, we explore various strategies to counter stereotypical beliefs
with anti-stereotypes. It is known that countering stereotypes with
anti-stereotypical examples is one of the most effective ways to reduce biased
thinking, yet the problem of generating anti-stereotypes has not been
previously studied. Thus, a better understanding of how to generate realistic
and effective anti-stereotypes can contribute to addressing pressing societal
concerns of stereotyping, prejudice, and discrimination.
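
The abstract's core mechanism, placing terms along warmth and competence axes in an embedding space, can be sketched roughly as below. This is a minimal illustration rather than the authors' exact procedure: the GloVe vectors and the seed word lists are stand-ins chosen for the example, whereas the paper derives its axes from annotated lexicons.

```python
# Minimal sketch of SCM-style warmth/competence axes in an embedding space.
# Assumptions: the GloVe vectors and the seed lists below are illustrative
# stand-ins, not the lexicons or embeddings used in the paper.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # any word-embedding space would do

WARM = ["friendly", "warm", "kind", "trustworthy", "sincere"]
COLD = ["cold", "hostile", "unfriendly", "dishonest", "threatening"]
COMPETENT = ["competent", "intelligent", "skilled", "capable", "efficient"]
INCOMPETENT = ["incompetent", "unskilled", "foolish", "incapable", "lazy"]

def axis(positive, negative):
    """Semantic axis: difference of the mean seed embeddings, unit-normalised."""
    direction = (np.mean([vectors[w] for w in positive], axis=0)
                 - np.mean([vectors[w] for w in negative], axis=0))
    return direction / np.linalg.norm(direction)

warmth_axis = axis(WARM, COLD)
competence_axis = axis(COMPETENT, INCOMPETENT)

def scm_coordinates(word):
    """Project a word onto the warmth and competence axes; the sign pattern
    of the two projections places it in one of the four SCM quadrants."""
    v = vectors[word] / np.linalg.norm(vectors[word])
    return float(v @ warmth_axis), float(v @ competence_axis)

for term in ["grandmother", "scientist", "criminal", "intern"]:
    w, c = scm_coordinates(term)
    print(f"{term:12s} warmth={w:+.3f} competence={c:+.3f}")
```

On this naive reading, an anti-stereotype for a group placed in, say, the low-competence/high-warmth quadrant would be a counter-example that projects into the opposite quadrant; the paper itself explores and compares several more careful strategies for generating realistic and effective anti-stereotypes.
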
Related papers
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes [12.704072523930444]
This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness.
arXiv Detail & Related papers (2024-04-18T01:48:28Z)
- Stereotype Detection in LLMs: A Multiclass, Explainable, and Benchmark-Driven Approach [4.908389661988191]
This paper introduces the Multi-Grain Stereotype (MGS) dataset, consisting of 51,867 instances across gender, race, profession, religion, and other stereotypes.
We evaluate various machine learning approaches to establish baselines and fine-tune language models of different architectures and sizes.
We employ explainable AI (XAI) tools, including SHAP, LIME, and BertViz, to assess whether the model's learned patterns align with human intuitions about stereotypes.
arXiv Detail & Related papers (2024-04-02T09:31:32Z)
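
The XAI step mentioned in the MGS entry above can be sketched with SHAP over a Hugging Face text-classification pipeline. This is a rough illustration under stated assumptions: the public SST-2 sentiment model and the example sentence are stand-ins, since the paper's fine-tuned stereotype classifiers are not specified here, and the exact tooling in the paper may differ.

```python
# Rough sketch of token-level attributions with SHAP over a text classifier.
# Assumption: the public SST-2 sentiment model is only a stand-in for a
# fine-tuned stereotype classifier; the input sentence is illustrative.
import shap
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for every label so SHAP can explain each one
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["The immigrants next door are surprisingly hardworking."])

# Per-token contributions to the first output class of the first example;
# in a notebook, shap.plots.text(shap_values) renders these interactively.
for token, value in zip(shap_values.data[0], shap_values.values[0][:, 0]):
    print(f"{token!r:>20} {value:+.4f}")
```
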
- Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes [73.12947922129261]
We leverage the zero-shot capabilities of large language models to reduce stereotyping.
We show that self-debiasing can significantly reduce the degree of stereotyping across nine different social groups.
We hope this work opens inquiry into other zero-shot techniques for bias mitigation.
arXiv Detail & Related papers (2024-02-03T01:40:11Z)
- Quantifying Stereotypes in Language [6.697298321551588]
We quantify stereotypes in language by annotating a dataset.
We train pre-trained language models (PLMs) on this dataset to predict the stereotypes expressed in sentences.
We discuss stereotypes about common social issues such as hate speech, sexism, sentiments, and disadvantaged and advantaged groups.
arXiv Detail & Related papers (2024-01-28T01:07:21Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- CO-STAR: Conceptualisation of Stereotypes for Analysis and Reasoning [0.0]
We build on existing literature and present CO-STAR, a novel framework which encodes the underlying concepts of implied stereotypes.
We also introduce the CO-STAR training data set, which contains just over 12K structured annotations of implied stereotypes and stereotype conceptualisations.
The CO-STAR models are, however, limited in their ability to understand more complex and subtly worded stereotypes.
arXiv Detail & Related papers (2021-12-01T20:39:04Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- UnQovering Stereotyping Biases via Underspecified Questions [68.81749777034409]
We present UNQOVER, a framework to probe and quantify biases through underspecified questions.
We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors.
We use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion.
arXiv Detail & Related papers (2020-10-06T01:49:52Z)
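
As a rough illustration of probing with underspecified questions (not the UNQOVER framework itself, and deliberately the kind of naive score comparison the entry above warns about), one can ask an extractive QA model a question whose answer is not recoverable from the context and inspect which subject it nonetheless prefers. The model name, template, and question below are arbitrary stand-ins.

```python
# Naive underspecified-question probe (illustrative only): the context never
# says who the nurse is, so any systematic preference between the two names
# reflects model bias rather than evidence in the passage.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = "John got off the flight to visit Mary."
question = "Who is a nurse?"

# Ask for the top few candidate spans and compare the scores given to each name.
answers = qa(question=question, context=context, top_k=5)
for a in answers:
    print(f"{a['answer']!r:>12} score={a['score']:.4f}")
# Note: raw score gaps like this are exactly the naive estimate UNQOVER warns
# about; the paper's metric also aggregates over swapped subjects and negated questions.
```
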
This list is automatically generated from the titles and abstracts of the papers on this site.