Quantifying How Hateful Communities Radicalize Online Users
- URL: http://arxiv.org/abs/2209.08697v1
- Date: Mon, 19 Sep 2022 01:13:29 GMT
- Title: Quantifying How Hateful Communities Radicalize Online Users
- Authors: Matheus Schmitz, Keith Burghardt, Goran Muric
- Abstract summary: We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While online social media offers a way for ignored or stifled voices to be
heard, it also allows users a platform to spread hateful speech. Such speech
usually originates in fringe communities, yet it can spill over into mainstream
channels. In this paper, we measure the impact of joining fringe hateful
communities in terms of hate speech propagated to the rest of the social
network. We leverage data from Reddit to assess the effect of joining one type
of echo chamber: a digital community of like-minded users exhibiting hateful
behavior. We measure members' usage of hate speech outside the studied
community before and after they become active participants. Using Interrupted
Time Series (ITS) analysis as a causal inference method, we gauge the spillover
effect, in which hateful language from within a certain community can spread
outside that community by using the level of out-of-community hate word usage
as a proxy for learned hate. We investigate four different Reddit
sub-communities (subreddits) covering three areas of hate speech: racism,
misogyny and fat-shaming. In all three cases we find an increase in hate speech
outside the originating community, implying that joining such a community leads
to a spread of hate speech throughout the platform. Moreover, users are found
to pick up this new hateful speech for months after initially joining the
community. We show that the harmful speech does not remain contained within the
community. Our results provide new evidence of the harmful effects of echo
chambers and the potential benefit of moderating them to reduce adoption of
hateful speech.
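The abstract describes Interrupted Time Series (ITS) analysis: a user's out-of-community hate-word usage is modeled before and after they join the community, and the jump in level and change in trend at the joining point estimate the spillover effect. The following is a minimal sketch of that idea using segmented regression on synthetic data; the variable names, model form, and data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_its(y, t0):
    """Segmented regression: y_t = b0 + b1*t + b2*D_t + b3*(t - t0)*D_t.

    t0 is the intervention point (e.g. the week a user joins the hateful
    community). b2 estimates the immediate level change in out-of-community
    hate-word usage; b3 estimates the change in trend after joining.
    """
    t = np.arange(len(y), dtype=float)
    D = (t >= t0).astype(float)  # post-joining indicator
    X = np.column_stack([np.ones_like(t), t, D, (t - t0) * D])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, b1, b2, b3]

# Synthetic example: a flat baseline rate of 0.5, then a +2.0 level jump
# and an upward trend of 0.1/week after "joining" at week 50.
rng = np.random.default_rng(0)
t = np.arange(100)
y = 0.5 + 2.0 * (t >= 50) + 0.1 * np.clip(t - 50, 0, None) + rng.normal(0, 0.1, 100)

b0, b1, b2, b3 = fit_its(y, t0=50)
print(f"level change: {b2:.2f}, trend change: {b3:.3f}")
```

The fitted `b2` and `b3` recover the simulated jump and slope change; in the paper's setting a positive `b2` or `b3` would indicate hate speech spilling outside the community after joining.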
Related papers
- Hostile Counterspeech Drives Users From Hate Subreddits
We analyze the effect of counterspeech on newcomers within hate subreddits on Reddit.
Non-hostile counterspeech is ineffective at driving users to disengage from these hate subreddits.
A single hostile counterspeech comment substantially reduces a newcomer's likelihood of future engagement.
arXiv Detail & Related papers (2024-05-28T17:12:41Z) - Analyzing User Characteristics of Hate Speech Spreaders on Social Media
We analyze the role of user characteristics in hate speech resharing across different types of hate speech.
We find that users with little social influence tend to share more hate speech.
Political anti-Trump and anti-right-wing hate is reshared by users with larger social influence.
arXiv Detail & Related papers (2023-10-24T12:17:48Z) - Analyzing Norm Violations in Live-Stream Chat
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - On the rise of fear speech in online social media
Fear speech, as the name suggests, attempts to incite fear about a target community.
This article presents a large-scale study of the prevalence of fear speech, analyzing 400K fear speech posts and over 700K hate speech posts collected from Gab.com.
arXiv Detail & Related papers (2023-03-18T02:46:49Z) - CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z) - Hatemongers ride on echo chambers to escalate hate speech diffusion
We analyze more than 32 million posts from over 6.8 million users across three popular online social networks.
We find that hatemongers play a more crucial role in governing the spread of information compared to singled-out hateful content.
arXiv Detail & Related papers (2023-02-05T20:30:48Z) - Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z) - Nipping in the Bud: Detection, Diffusion and Mitigation of Hate Speech on Social Media
This article presents methodological challenges that hinder building automated hate mitigation systems.
We discuss a series of our proposed solutions to limit the spread of hate speech on social media.
arXiv Detail & Related papers (2022-01-04T03:44:46Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - Echo Chambers on Social Media: A comparative analysis
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.