Multi-dimensional Racism Classification during COVID-19: Stigmatization,
Offensiveness, Blame, and Exclusion
- URL: http://arxiv.org/abs/2208.13318v1
- Date: Mon, 29 Aug 2022 00:38:56 GMT
- Title: Multi-dimensional Racism Classification during COVID-19: Stigmatization,
Offensiveness, Blame, and Exclusion
- Authors: Xin Pei, Deval Mehta
- Abstract summary: We develop a multi-dimensional model for racism detection, namely stigmatization, offensiveness, blame, and exclusion.
This categorical detection enables insights into the underlying subtlety of racist discussion on digital platforms during COVID-19.
- Score: 1.0878040851638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transcending the binary categorization of racist texts, our study takes cues
from social science theories to develop a multi-dimensional model for racism
detection, namely stigmatization, offensiveness, blame, and exclusion. With the
aid of BERT and topic modeling, this categorical detection enables insights
into the underlying subtlety of racist discussion on digital platforms during
COVID-19. Our study contributes to enriching the scholarly discussion on
deviant racist behaviours on social media. First, a stage-wise analysis is
applied to capture the dynamics of the topic changes across the early stages of
COVID-19 which transformed from a domestic epidemic to an international public
health emergency and later to a global pandemic. Furthermore, mapping this
trend enables more accurate prediction of how public opinion concerning racism
evolves in the offline world, as well as the enactment of targeted
intervention strategies to combat the upsurge of racism during a global
public health crisis like COVID-19. In addition, this interdisciplinary
research also points out a direction for future studies on social network
analysis and mining. Integration of social science perspectives into the
development of computational methods provides insights into more accurate data
detection and analytics.
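The stage-wise analysis described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes a classifier (e.g., BERT) has already assigned each tweet one of the four categories, and it uses the WHO's PHEIC declaration (2020-01-30) and pandemic declaration (2020-03-11) as illustrative boundaries between the three stages, since the paper's exact cutoff dates are not given here.

```python
from collections import Counter
from datetime import date

# Assumed stage boundaries, following the paper's three-stage framing:
# domestic epidemic -> international public health emergency -> global pandemic.
STAGES = [
    ("domestic epidemic", date(2020, 1, 30)),       # before PHEIC declaration
    ("public health emergency", date(2020, 3, 11)), # before pandemic declaration
    ("global pandemic", date.max),                  # everything after
]

# The paper's four racism dimensions.
CATEGORIES = {"stigmatization", "offensiveness", "blame", "exclusion"}

def stage_of(d: date) -> str:
    """Map a tweet date to the first stage whose boundary it precedes."""
    for name, boundary in STAGES:
        if d < boundary:
            return name
    return STAGES[-1][0]

def stage_wise_counts(labeled_tweets):
    """Count predicted racism categories per pandemic stage.

    labeled_tweets: iterable of (date, category) pairs, where category is
    one of the four dimensions (e.g., output of a BERT classifier).
    """
    counts = {name: Counter() for name, _ in STAGES}
    for d, category in labeled_tweets:
        if category in CATEGORIES:
            counts[stage_of(d)][category] += 1
    return counts

# Toy labels for illustration only (not real data):
tweets = [
    (date(2020, 1, 15), "stigmatization"),
    (date(2020, 2, 20), "blame"),
    (date(2020, 4, 1), "exclusion"),
    (date(2020, 4, 2), "blame"),
]
print(stage_wise_counts(tweets))
```

Comparing the per-stage distributions produced this way is one simple route to the topic- and category-change dynamics the abstract describes.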
Related papers
- Community Shaping in the Digital Age: A Temporal Fusion Framework for Analyzing Discourse Fragmentation in Online Social Networks [45.58331196717468]
This research presents a framework for analyzing the dynamics of online communities in social media platforms.
By combining text classification and dynamic social network analysis, we uncover mechanisms driving community formation and evolution.
arXiv Detail & Related papers (2024-09-18T03:03:02Z) - A longitudinal sentiment analysis of Sinophobia during COVID-19 using large language models [3.3741245091336083]
The COVID-19 pandemic has exacerbated xenophobia, particularly Sinophobia, leading to widespread discrimination against individuals of Chinese descent.
We present a sentiment analysis framework utilising LLMs for longitudinal sentiment analysis of the Sinophobic sentiments expressed in X (Twitter) during the COVID-19 pandemic.
The results show a significant correlation between the spikes in Sinophobic tweets, Sinophobic sentiments and surges in COVID-19 cases, revealing that the evolution of the pandemic influenced public sentiment and the prevalence of Sinophobic discourse.
arXiv Detail & Related papers (2024-08-29T23:39:11Z) - The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on DoFaiR reveal that diversity-oriented instructions increase the number of different gender and racial groups.
We propose Fact-Augmented Intervention (FAI) to reflect on verbalized or retrieved factual information about gender and racial compositions of generation subjects in history.
arXiv Detail & Related papers (2024-06-29T09:09:42Z) - The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability with low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Aggression and "hate speech" in communication of media users: analysis
of control capabilities [50.591267188664666]
The authors studied the possibilities of mutual influence among users in new media.
They found a high level of aggression and hate speech in discussions of an urgent social problem: measures to fight COVID-19.
Results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z) - Beyond a binary of (non)racist tweets: A four-dimensional categorical
detection and analysis of racist and xenophobic opinions on Twitter in early
Covid-19 [0.0]
This research develops a four-dimensional categorization for racism and xenophobia detection, namely stigmatization, offensiveness, blame, and exclusion.
With the aid of deep learning techniques, this categorical detection enables insights into the nuances of emergent topics reflected in racist and xenophobic expression on Twitter.
arXiv Detail & Related papers (2021-07-18T02:37:31Z) - Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z) - On Analyzing Antisocial Behaviors Amid COVID-19 Pandemic [5.900114841365645]
Despite the gravity of the issue, few studies have examined online antisocial behaviors amid the COVID-19 pandemic.
In this paper, we fill the research gap by collecting and annotating a large dataset of over 40 million COVID-19 related tweets.
We also conduct an empirical analysis of our annotated dataset and find that new abusive lexicons were introduced amid the COVID-19 pandemic.
arXiv Detail & Related papers (2020-07-21T11:11:35Z) - #Coronavirus or #Chinesevirus?!: Understanding the negative sentiment
reflected in Tweets with racist hashtags across the development of COVID-19 [1.0878040851638]
We focus on the analysis of negative sentiment reflected in tweets marked with racist hashtags.
We propose a stage-based approach to capture how the negative sentiment changes along with the three development stages of COVID-19.
arXiv Detail & Related papers (2020-05-17T11:15:50Z) - Detecting East Asian Prejudice on Social Media [10.647940201343575]
We report on the creation of a classifier that detects and categorizes social media posts from Twitter into four classes: Hostility against East Asia, Criticism of East Asia, Meta-discussions of East Asian prejudice, and a neutral class.
arXiv Detail & Related papers (2020-05-08T08:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.