When the Echo Chamber Shatters: Examining the Use of Community-Specific
Language Post-Subreddit Ban
- URL: http://arxiv.org/abs/2106.16207v1
- Date: Wed, 30 Jun 2021 16:59:46 GMT
- Title: When the Echo Chamber Shatters: Examining the Use of Community-Specific
Language Post-Subreddit Ban
- Authors: Milo Z. Trujillo, Samuel F. Rosenblatt, Guillermo de Anda Jáuregui,
Emily Moog, Briane Paul V. Samson, Laurent Hébert-Dufresne and Allison M.
Roth
- Abstract summary: Community-level bans are a common tool against groups that enable online harassment and harmful speech.
Here, we provide a flexible unsupervised methodology to identify in-group language and track user activity on Reddit.
Top users were more likely to become less active overall, while random users often reduced use of in-group language without decreasing activity.
Users of dark humor communities were largely unaffected by bans while users of communities organized around white supremacy and fascism were the most affected.
- Score: 4.884793296603604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Community-level bans are a common tool against groups that enable online
harassment and harmful speech. Unfortunately, the efficacy of community bans
has only been partially studied and with mixed results. Here, we provide a
flexible unsupervised methodology to identify in-group language and track user
activity on Reddit both before and after the ban of a community (subreddit). We
use a simple word frequency divergence to identify uncommon words
overrepresented in a given community, not as a proxy for harmful speech but as
a linguistic signature of the community. We apply our method to 15 banned
subreddits, and find that community response is heterogeneous between
subreddits and between users of a subreddit. Top users were more likely to
become less active overall, while random users often reduced use of in-group
language without decreasing activity. Finally, we find some evidence that the
effectiveness of bans aligns with the content of a community. Users of dark
humor communities were largely unaffected by bans while users of communities
organized around white supremacy and fascism were the most affected.
Altogether, our results show that bans do not affect all groups or users
equally, and pave the way to understanding the effect of bans across
communities.
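The word frequency divergence described in the abstract can be illustrated with a minimal sketch: score each word a community uses by its smoothed frequency in the community's comments relative to a background corpus, and take the most overrepresented words as the community's linguistic signature. This is an assumption-laden illustration, not the authors' exact implementation; the function name, the smoothing constant `alpha`, and the toy token lists are invented for the example.

```python
from collections import Counter

def ingroup_signature(community_tokens, background_tokens, top_k=5, alpha=1.0):
    """Rank words by overrepresentation in a community relative to a
    background corpus, using a smoothed frequency ratio. The score is a
    linguistic signature of the community, not a harmful-speech proxy."""
    comm = Counter(community_tokens)
    back = Counter(background_tokens)
    n_comm = sum(comm.values())
    n_back = sum(back.values())
    # Only words the community actually uses, in first-seen order
    # (a list, so tie-breaking below is deterministic).
    vocab = list(dict.fromkeys(community_tokens))

    def score(word):
        # Add-alpha smoothing keeps the ratio finite for words the
        # background corpus never contains.
        p_comm = (comm[word] + alpha) / (n_comm + alpha * len(vocab))
        p_back = (back[word] + alpha) / (n_back + alpha * len(vocab))
        return p_comm / p_back

    return sorted(vocab, key=score, reverse=True)[:top_k]

# Toy corpora: in-group slang is rare overall but common in-community.
community = "normie normie redpilled based based kek the the a".split()
background = "the the the a a of and to normie".split()
print(ingroup_signature(community, background, top_k=3))
# → ['based', 'redpilled', 'kek']
```

In practice the background corpus would be a large sample of general Reddit comments, so common function words score near 1 and drop to the bottom of the ranking.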
Related papers
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Online conspiracy communities are more resilient to deplatforming [2.9767849911461504]
We compare the shift in behavior of users affected by the ban of two large communities on Reddit, GreatAwakening and FatPeopleHate.
We estimate how many users migrate, finding that users in the conspiracy community are much more likely to leave Reddit altogether and join Voat.
A small number of migrating zealots drives the growth of the new GreatAwakening community on Voat, while this effect is absent for FatPeopleHate.
arXiv Detail & Related papers (2023-03-21T18:08:51Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Spillover of Antisocial Behavior from Fringe Platforms: The Unintended Consequences of Community Banning [0.2752817022620644]
We show that participating in fringe communities on Reddit increases users' toxicity and involvement with subreddits similar to the banned community.
In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
arXiv Detail & Related papers (2022-09-20T15:48:27Z)
- Quantifying How Hateful Communities Radicalize Online Users [2.378428291297535]
We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
arXiv Detail & Related papers (2022-09-19T01:13:29Z)
- Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topic.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels [20.41491269475746]
We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
arXiv Detail & Related papers (2020-10-20T16:03:06Z)
- Breaking the Communities: Characterizing community changing users using text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for classifying social media users that detects "community breakers".
arXiv Detail & Related papers (2020-08-24T23:44:51Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.