Do Platform Migrations Compromise Content Moderation? Evidence from
r/The_Donald and r/Incels
- URL: http://arxiv.org/abs/2010.10397v3
- Date: Fri, 20 Aug 2021 12:02:10 GMT
- Title: Do Platform Migrations Compromise Content Moderation? Evidence from
r/The_Donald and r/Incels
- Authors: Manoel Horta Ribeiro, Shagun Jhaver, Savvas Zannettou, Jeremy
Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Robert West
- Abstract summary: We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
- Score: 20.41491269475746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When toxic online communities on mainstream platforms face moderation
measures, such as bans, they may migrate to other platforms with laxer policies
or set up their own dedicated websites. Previous work suggests that within
mainstream platforms, community-level moderation is effective in mitigating the
harm caused by the moderated communities. It is, however, unclear whether these
results also hold when considering the broader Web ecosystem. Do toxic
communities continue to grow in terms of their user base and activity on the
new platforms? Do their members become more toxic and ideologically
radicalized? In this paper, we report the results of a large-scale
observational study of how problematic online communities progress following
community-level moderation measures. We analyze data from r/The_Donald and
r/Incels, two communities that were banned from Reddit and subsequently
migrated to their own standalone websites. Our results suggest that, in both
cases, moderation measures significantly decreased posting activity on the new
platform, reducing the number of posts, active users, and newcomers. In spite
of that, users in one of the studied communities (r/The_Donald) showed
increases in signals associated with toxicity and radicalization, which
justifies concerns that the reduction in activity may come at the expense of a
more toxic and radical community. Overall, our results paint a nuanced portrait
of the consequences of community-level moderation and can inform their design
and deployment.
Related papers
- Taming Toxicity or Fueling It? The Great Ban's Role in Shifting Toxic User Behavior and Engagement [0.6918368994425961]
We evaluate the effectiveness of The Great Ban, one of the largest deplatforming interventions carried out by Reddit.
We analyzed 53M comments shared by nearly 34K users.
We found that 15.6% of the moderated users abandoned the platform while the remaining ones decreased their overall toxicity by 4.1%.
arXiv Detail & Related papers (2024-11-06T16:34:59Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Online conspiracy communities are more resilient to deplatforming [2.9767849911461504]
We compare the shift in behavior of users affected by the ban of two large communities on Reddit, GreatAwakening and FatPeopleHate.
We estimate how many users migrate, finding that users in the conspiracy community are much more likely to leave Reddit altogether and join Voat.
A small number of migrating zealots drives the growth of the new GreatAwakening community on Voat, while this effect is absent for FatPeopleHate.
arXiv Detail & Related papers (2023-03-21T18:08:51Z)
- Understanding Online Migration Decisions Following the Banning of Radical Communities [0.2752817022620644]
We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, i.e., those relating to users' behavior, are associated with the decision to post on the fringe platform.
arXiv Detail & Related papers (2022-12-09T10:43:15Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Spillover of Antisocial Behavior from Fringe Platforms: The Unintended Consequences of Community Banning [0.2752817022620644]
We show that participating in fringe communities on Reddit increases users' toxicity and involvement with subreddits similar to the banned community.
In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
arXiv Detail & Related papers (2022-09-20T15:48:27Z)
- One of Many: Assessing User-level Effects of Moderation Interventions on r/The_Donald [1.1041211464412573]
We evaluate the user-level effects of the sequence of moderation interventions that targeted r/The_Donald on Reddit.
We find that interventions with strong community-level effects also cause extreme and diversified user-level reactions.
Our results highlight that platform- and community-level effects are not always representative of the underlying behavior of individuals or smaller user groups.
arXiv Detail & Related papers (2022-09-19T07:46:18Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.