No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment
- URL: http://arxiv.org/abs/2304.07037v7
- Date: Sat, 13 Apr 2024 22:45:12 GMT
- Title: No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment
- Authors: Anh V. Vu, Alice Hutchings, Ross Anderson
- Abstract summary: We show that deplatforming an active community to suppress online hate and harassment can be hard.
Our case study is the disruption of the largest and longest-running harassment forum, Kiwi Farms, in late 2022.
- Score: 4.8185026703701705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Legislators and policymakers worldwide are debating options for suppressing illegal, harmful and undesirable material online. Drawing on several quantitative data sources, we show that deplatforming an active community to suppress online hate and harassment, even with a substantial concerted effort involving several tech firms, can be hard. Our case study is the disruption of the largest and longest-running harassment forum Kiwi Farms in late 2022, which is probably the most extensive industry effort to date. Despite the active participation of a number of tech companies over several consecutive months, this campaign failed to shut down the forum and remove its objectionable content. While briefly raising public awareness, it led to rapid platform displacement and traffic fragmentation. Part of the activity decamped to Telegram, while traffic shifted from the primary domain to previously abandoned alternatives. The forum experienced intermittent outages for several weeks, after which the community leading the campaign lost interest, traffic was directed back to the main domain, users quickly returned, and the forum was back online and became even more connected. The forum members themselves stopped discussing the incident shortly thereafter, and the net effect was that forum activity, active users, threads, posts and traffic were all cut by about half. Deplatforming a community without a court order raises philosophical issues about censorship versus free speech; ethical and legal issues about the role of industry in online content moderation; and practical issues on the efficacy of private-sector versus government action. Deplatforming a dispersed community using a series of court orders against individual service providers appears unlikely to be very effective if the censor cannot incapacitate the key maintainers, whether by arresting them, enjoining them or otherwise deterring them.
Related papers
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, which scores abusive speech on four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - Online conspiracy communities are more resilient to deplatforming [2.9767849911461504]
We compare the shift in behavior of users affected by the ban of two large communities on Reddit, GreatAwakening and FatPeopleHate.
We estimate how many users migrate, finding that users in the conspiracy community are much more likely to leave Reddit altogether and join Voat.
A few migrating zealots drive the growth of the new GreatAwakening community on Voat, while this effect is absent for FatPeopleHate.
arXiv Detail & Related papers (2023-03-21T18:08:51Z) - Understanding Online Migration Decisions Following the Banning of Radical Communities [0.2752817022620644]
We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform.
arXiv Detail & Related papers (2022-12-09T10:43:15Z) - Nobody Wants to Work Anymore: An Analysis of r/antiwork and the Interplay between Social and Mainstream Media during the Great Resignation [9.299167002524653]
r/antiwork is a Reddit community that focuses on the discussion of worker exploitation, labour rights and related left-wing political ideas.
In late 2021, r/antiwork became the fastest growing community on Reddit, coinciding with what the mainstream media began referring to as the Great Resignation.
We investigate how the r/antiwork community was affected by the exponential increase in subscribers and the media coverage that chronicled its rise.
arXiv Detail & Related papers (2022-10-14T13:27:14Z) - Quantifying How Hateful Communities Radicalize Online Users [2.378428291297535]
We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
arXiv Detail & Related papers (2022-09-19T01:13:29Z) - Nipping in the Bud: Detection, Diffusion and Mitigation of Hate Speech on Social Media [21.47216483704825]
This article presents methodological challenges that hinder building automated hate mitigation systems.
We discuss a series of our proposed solutions to limit the spread of hate speech on social media.
arXiv Detail & Related papers (2022-01-04T03:44:46Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.