The Great Ban: Efficacy and Unintended Consequences of a Massive Deplatforming Operation on Reddit
- URL: http://arxiv.org/abs/2401.11254v5
- Date: Tue, 28 May 2024 09:50:47 GMT
- Title: The Great Ban: Efficacy and Unintended Consequences of a Massive Deplatforming Operation on Reddit
- Authors: Lorenzo Cima, Amaury Trujillo, Marco Avvenuti, Stefano Cresci
- Abstract summary: We assess the effectiveness of The Great Ban, a massive deplatforming operation that affected nearly 2,000 communities on Reddit.
By analyzing 16M comments posted by 17K users during 14 months, we provide nuanced results on the effects, both desired and otherwise.
- Score: 0.7422344184734279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the current landscape of online abuses and harms, effective content moderation is necessary to cultivate safe and inclusive online spaces. Yet, the effectiveness of many moderation interventions is still unclear. Here, we assess the effectiveness of The Great Ban, a massive deplatforming operation that affected nearly 2,000 communities on Reddit. By analyzing 16M comments posted by 17K users during 14 months, we provide nuanced results on the effects, both desired and otherwise, of the ban. Among our main findings is that 15.6% of the affected users left Reddit and that those who remained reduced their toxicity by 6.6% on average. The ban also caused 5% of users to increase their toxicity by more than 70% of their pre-ban level. Overall, our multifaceted results provide new insights into the efficacy of deplatforming. As such, our findings can inform the development of future moderation interventions and the policing of online platforms.
Related papers
- Taming Toxicity or Fueling It? The Great Ban's Role in Shifting Toxic User Behavior and Engagement [0.6918368994425961]
We evaluate the effectiveness of The Great Ban, one of the largest deplatforming interventions carried out by Reddit.
We analyzed 53M comments shared by nearly 34K users.
We found that 15.6% of the moderated users abandoned the platform while the remaining ones decreased their overall toxicity by 4.1%.
arXiv Detail & Related papers (2024-11-06T16:34:59Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Deplatforming Norm-Violating Influencers on Social Media Reduces Overall Online Attention Toward Them [11.958455966181807]
We study 165 deplatforming events targeted at 101 influencers on Reddit.
We find that deplatforming reduces online attention toward influencers.
This work contributes to the ongoing effort to map the effectiveness of content moderation interventions.
arXiv Detail & Related papers (2024-01-02T15:40:35Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications [62.60524178293434]
The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps.
Users' digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information.
Users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect.
arXiv Detail & Related papers (2023-05-10T08:11:21Z) - One of Many: Assessing User-level Effects of Moderation Interventions on r/The_Donald [1.1041211464412573]
We evaluate the user-level effects of the sequence of moderation interventions that targeted r/The_Donald on Reddit.
We find that interventions having strong community-level effects also cause extreme and diversified user-level reactions.
Our results highlight that platform- and community-level effects are not always representative of the underlying behavior of individuals or smaller user groups.
arXiv Detail & Related papers (2022-09-19T07:46:18Z) - Make Reddit Great Again: Assessing Community Effects of Moderation Interventions on r/The_Donald [1.1041211464412573]
r/The_Donald was repeatedly denounced as a toxic and misbehaving online community.
It was quarantined in June 2019, restricted in February 2020, and finally banned in June 2020, but the effects of this sequence of interventions are still unclear.
We find that the interventions greatly reduced the activity of problematic users.
However, the interventions also caused an increase in toxicity and led users to share more polarized and less factual news.
arXiv Detail & Related papers (2022-01-17T15:09:51Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels [20.41491269475746]
We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
arXiv Detail & Related papers (2020-10-20T16:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.