The Peripatetic Hater: Predicting Movement Among Hate Subreddits
- URL: http://arxiv.org/abs/2405.17410v2
- Date: Wed, 20 Nov 2024 19:02:19 GMT
- Title: The Peripatetic Hater: Predicting Movement Among Hate Subreddits
- Authors: Daniel Hickey, Daniel M. T. Fessler, Kristina Lerman, Keith Burghardt
- Abstract summary: We develop a new method to classify hate subreddits and the identities they disparage.
We find distinct clusters of subreddits targeting various identities, such as racist subreddits, xenophobic subreddits, and transphobic subreddits.
We show that users who join additional hate subreddits, especially those of a different category, develop a wider hate group lexicon.
- Abstract: Many online hate groups exist to disparage others based on race, gender identity, sex, or other characteristics. The accessibility of these communities allows users to join multiple types of hate groups (e.g., a racist community and a misogynistic community), raising the question of whether users who join additional types of hate communities could be further radicalized compared to users who stay in one type of hate group. However, little is known about the dynamics of joining multiple types of hate groups, nor the effect of these groups on peripatetic users. We develop a new method to classify hate subreddits and the identities they disparage, then apply it to better understand how users come to join different types of hate subreddits. The hate classification technique utilizes human-validated deep learning models to extract the protected identities attacked, if any, across 168 subreddits. We find distinct clusters of subreddits targeting various identities, such as racist subreddits, xenophobic subreddits, and transphobic subreddits. We show that when users become active in their first hate subreddit, they have a high likelihood of becoming active in additional hate subreddits of a different category. We also find that users who join additional hate subreddits, especially those of a different category, develop a wider hate group lexicon. These results then lead us to train a deep learning model that, as we demonstrate, usefully predicts the hate categories in which users will become active based on the text of posts they write and reply to. The accuracy of this model may be partly driven by peripatetic users often using the language of hate subreddits they eventually join. Overall, these results highlight the unique risks associated with hate communities on a social media platform, as discussion of alternative targets of hate may lead users to target more protected identities.
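The classification step described above (models that extract which protected identities, if any, a subreddit's posts attack) amounts to multi-label text classification. The following is a minimal, hypothetical sketch of that task using TF-IDF features and one-vs-rest logistic regression; it is not the authors' human-validated deep learning pipeline, and the category names and example texts are neutral placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Placeholder posts and multi-label targets: each post may attack zero or
# more identity categories (names here are hypothetical stand-ins).
texts = [
    "post attacking group A with slur_a",
    "post attacking group B with slur_b",
    "post attacking group A and group B",
    "neutral post about the weather",
]
labels = [{"category_A"}, {"category_B"}, {"category_A", "category_B"}, set()]

# Encode label sets as a binary indicator matrix, one column per category.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One binary classifier per identity category over shared TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, Y)

# Predict categories for a new post; an empty tuple means
# "no protected identity attacked".
pred = clf.predict(["another post about group A with slur_a"])
print(mlb.inverse_transform(pred))
```

Aggregating such per-post predictions over a subreddit's content would then yield the subreddit-level identity labels used for clustering.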
Related papers
- Analyzing User Characteristics of Hate Speech Spreaders on Social Media [20.57872238271025]
We analyze the role of user characteristics in hate speech resharing across different types of hate speech.
We find that users with little social influence tend to share more hate speech.
Political anti-Trump and anti-right-wing hate is reshared by users with larger social influence.
arXiv Detail & Related papers (2023-10-24T12:17:48Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - Hatemongers ride on echo chambers to escalate hate speech diffusion [23.714548893849393]
We analyze more than 32 million posts from over 6.8 million users across three popular online social networks.
We find that hatemongers play a more crucial role in governing the spread of information compared to singled-out hateful content.
arXiv Detail & Related papers (2023-02-05T20:30:48Z) - Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z) - Spillover of Antisocial Behavior from Fringe Platforms: The Unintended Consequences of Community Banning [0.2752817022620644]
We show that participating in fringe communities on Reddit increases users' toxicity and involvement with subreddits similar to the banned community.
In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
arXiv Detail & Related papers (2022-09-20T15:48:27Z) - Quantifying How Hateful Communities Radicalize Online Users [2.378428291297535]
We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
arXiv Detail & Related papers (2022-09-19T01:13:29Z) - Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - When the Echo Chamber Shatters: Examining the Use of Community-Specific Language Post-Subreddit Ban [4.884793296603604]
Community-level bans are a common tool against groups that enable online harassment and harmful speech.
Here, we provide a flexible unsupervised methodology to identify in-group language and track user activity on Reddit.
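One common unsupervised way to operationalize "in-group language" is to score each word by its log-odds of appearing in the community's posts versus a background corpus; words far more frequent inside the community are candidate in-group terms. The sketch below illustrates that generic idea with smoothed frequencies and toy data; it is an illustrative stand-in, not necessarily the methodology of the paper cited above.

```python
import math
from collections import Counter

def in_group_scores(community_posts, background_posts, smoothing=1.0):
    """Score each word by smoothed log-odds of community vs. background use.

    High scores mark candidate in-group terms. Generic sketch with
    add-one-style smoothing, not a specific paper's exact method.
    """
    comm = Counter(w for p in community_posts for w in p.lower().split())
    back = Counter(w for p in background_posts for w in p.lower().split())
    vocab = set(comm) | set(back)
    n_comm, n_back = sum(comm.values()), sum(back.values())
    scores = {}
    for w in vocab:
        p_comm = (comm[w] + smoothing) / (n_comm + smoothing * len(vocab))
        p_back = (back[w] + smoothing) / (n_back + smoothing * len(vocab))
        scores[w] = math.log(p_comm / p_back)
    return scores

# Toy example: "jargon" appears only (and repeatedly) in the community.
scores = in_group_scores(
    ["our special jargon word", "more jargon here"],
    ["ordinary talk", "more ordinary talk here"],
)
print(max(scores, key=scores.get))
```

Tracking how often users emit high-scoring terms before and after a ban is one way to measure whether in-group language persists once the community is gone.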
Top users were more likely to become less active overall, while random users often reduced use of in-group language without decreasing activity.
Users of dark humor communities were largely unaffected by bans while users of communities organized around white supremacy and fascism were the most affected.
arXiv Detail & Related papers (2021-06-30T16:59:46Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that Twitter's moderation produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable sources, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.