Impact of Stricter Content Moderation on Parler's Users' Discourse
- URL: http://arxiv.org/abs/2310.08844v1
- Date: Fri, 13 Oct 2023 04:09:39 GMT
- Title: Impact of Stricter Content Moderation on Parler's Users' Discourse
- Authors: Nihal Kumarswamy, Mohit Singhal, Shirin Nilizadeh
- Abstract summary: We studied the moderation changes performed by Parler and their effect on the toxicity of its content.
Our quasi-experimental time series analysis indicates that after the change in Parler's moderation, severe forms of toxicity decreased immediately, and the decrease was sustained.
We found an increase in the factuality of the news sites being shared, as well as a decrease in the number of conspiracy or pseudoscience sources being shared.
- Score: 1.7863534204867277
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms employ various content moderation techniques to remove harmful, offensive, and hate speech content. The moderation level varies across platforms, and even within a single platform it can evolve over time. For example, Parler, a fringe social media platform popular among conservative users, was known to have the least restrictive moderation policies, claiming to offer open discussion spaces for its users. However, after the 2021 US Capitol riot was linked to the activity of some groups on Parler, such as QAnon and the Proud Boys, Parler was removed from the Apple and Google app stores and suspended from the Amazon cloud hosting service on January 12, 2021. Parler had to modify its moderation policies to return to these online stores. After a month of downtime, Parler came back online with a new set of user guidelines that reflected stricter content moderation, especially regarding its hate speech policy.
In this paper, we studied the moderation changes performed by Parler and their effect on the toxicity of its content. We collected a large longitudinal Parler dataset of 17M parleys from 432K active users, spanning February 2021 to January 2022, after Parler's return to the Internet and the app stores. To the best of our knowledge, this is the first study investigating the effectiveness of content moderation techniques using data-driven approaches, and also the first Parler dataset collected after its brief hiatus. Our quasi-experimental time series analysis indicates that after the change in Parler's moderation, severe forms of toxicity (scores above a threshold of 0.5) decreased immediately, and the decrease was sustained. In contrast, the trend did not change for less severe threats and insults (scores between 0.5 and 0.7). Finally, we found an increase in the factuality of the news sites being shared, as well as a decrease in the number of conspiracy or pseudoscience sources being shared.
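The quasi-experimental design described above is, in essence, an interrupted time series around the moderation change. As a rough illustration only, the sketch below fits the standard segmented-regression form of such an analysis with statsmodels; the file name, column names, and intervention date are hypothetical stand-ins, not the authors' actual pipeline.

```python
# Minimal sketch of an interrupted (segmented) time series regression,
# one common way to run the quasi-experiment described above.
# Assumptions (not from the paper): a daily series of mean toxicity
# scores in `toxicity.csv` with columns `date` and `mean_toxicity`,
# and a single known intervention date (Parler's policy change).
import pandas as pd
import statsmodels.formula.api as smf

INTERVENTION = pd.Timestamp("2021-02-15")  # hypothetical policy-change date

df = pd.read_csv("toxicity.csv", parse_dates=["date"]).sort_values("date")
df["t"] = range(len(df))                               # baseline time trend
df["post"] = (df["date"] >= INTERVENTION).astype(int)  # level change indicator
df["t_post"] = df["t"] * df["post"]                    # post-intervention slope change

# mean_toxicity ~ baseline trend + immediate level shift + change in trend
model = smf.ols("mean_toxicity ~ t + post + t_post", data=df).fit()
print(model.summary())
```

In this form, the coefficient on `post` estimates the immediate level shift and `t_post` tests whether the new trend holds afterward, which is what a claim like "decreased immediately and was sustained" rests on.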
Related papers
- An Image is Worth a Thousand Toxic Words: A Metamorphic Testing Framework for Content Moderation Software [64.367830425115]
Social media platforms are being increasingly misused to spread toxic content, including hate speech, malicious advertising, and pornography.
Despite tremendous efforts in developing and deploying content moderation methods, malicious users can evade moderation by embedding texts into images.
We propose a metamorphic testing framework for content moderation software (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-18T20:33:06Z)
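That entry is about testing whether a moderation verdict survives the text-to-image transformation. The fragment below is only a rough sketch of the core metamorphic relation, not the paper's actual framework; `moderate_text` and `moderate_image` are hypothetical stand-ins for a real moderation API.

```python
# Minimal sketch of the metamorphic relation behind such a framework:
# a moderation verdict should not flip when toxic text is re-rendered
# as an image. The moderation callables and the rendering setup are
# hypothetical placeholders, not any real library's API.
from PIL import Image, ImageDraw

def text_to_image(text: str) -> Image.Image:
    """Render text onto a plain white canvas (the metamorphic transform)."""
    img = Image.new("RGB", (800, 100), "white")
    ImageDraw.Draw(img).text((10, 40), text, fill="black")
    return img

def check_metamorphic(text: str, moderate_text, moderate_image) -> bool:
    """Return False when the image variant evades a text-level block."""
    source_blocked = moderate_text(text)
    followup_blocked = moderate_image(text_to_image(text))
    return source_blocked == followup_blocked
```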
- Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Going Extreme: Comparative Analysis of Hate Speech in Parler and Gab [2.487445341407889]
We provide the first large-scale analysis of hate speech on Parler.
To improve classification accuracy, we annotated 10K Parler posts (a baseline sketch of such a classifier follows this entry).
We find that hate mongers make up 16.1% of Parler's active users.
arXiv Detail & Related papers (2022-01-27T19:29:17Z)
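As a loose illustration of the annotate-then-classify workflow mentioned above, and not the authors' actual model, the sketch below trains a simple TF-IDF plus logistic regression baseline with scikit-learn; the posts and labels are hypothetical toy data standing in for the 10K annotated parleys.

```python
# Toy baseline sketch of hate-speech classification on annotated posts.
# The data below is hypothetical placeholder text, not from Parler.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["hateful example parley", "benign example parley",
         "another hateful parley", "another benign parley"]
labels = [1, 0, 1, 0]  # 1 = hate speech, 0 = not (manual annotation)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["yet another parley to score"]))
```

In practice one would hold out a test split and report precision/recall rather than fitting and predicting on toy data as here.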
- Comparing the Language of QAnon-related content on Parler, Gab, and Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump have more anti-social language than posts mentioning male politicians, Republicans, or Joe Biden.
arXiv Detail & Related papers (2021-11-22T11:19:15Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable sources, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels [20.41491269475746]
We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
arXiv Detail & Related papers (2020-10-20T16:03:06Z)
- Reading In-Between the Lines: An Analysis of Dissenter [2.2881898195409884]
We study Dissenter, a browser and web application that provides a conversational overlay for any web page.
In this work, we obtain a history of Dissenter comments, users, and the websites being discussed.
Our corpus consists of approximately 1.68M comments made by 101k users commenting on 588k distinct URLs.
arXiv Detail & Related papers (2020-09-03T16:25:28Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers (a toy sketch of one common echo-chamber measure follows this entry).
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
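The echo-chamber analysis above rests on comparing users' leanings with those of their network neighbors. The toy sketch below, which is not the paper's pipeline, illustrates one common operationalization with networkx: correlate each user's leaning with the mean leaning of their neighbors. The graph and leaning scores here are synthetic.

```python
# Minimal sketch of one common echo-chamber measure: a strong positive
# correlation between a user's leaning x_i in [-1, 1] and the average
# leaning of their neighbors is the usual echo-chamber signature.
import networkx as nx
import numpy as np

def neighbor_leaning(G: nx.Graph, leaning: dict) -> dict:
    """Average leaning of each user's neighbors in the interaction network."""
    return {
        u: np.mean([leaning[v] for v in G.neighbors(u)])
        for u in G if G.degree(u) > 0
    }

# Synthetic toy data: nodes stand in for users, edges for interactions.
G = nx.karate_club_graph()
rng = np.random.default_rng(0)
leaning = {u: rng.uniform(-1, 1) for u in G}

nbr = neighbor_leaning(G, leaning)
users = list(nbr)
corr = np.corrcoef([leaning[u] for u in users], [nbr[u] for u in users])[0, 1]
print(f"user-vs-neighborhood leaning correlation: {corr:.2f}")
```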