Deplatforming Norm-Violating Influencers on Social Media Reduces Overall
Online Attention Toward Them
- URL: http://arxiv.org/abs/2401.01253v1
- Date: Tue, 2 Jan 2024 15:40:35 GMT
- Title: Deplatforming Norm-Violating Influencers on Social Media Reduces Overall
Online Attention Toward Them
- Authors: Manoel Horta Ribeiro, Shagun Jhaver, Jordi Cluet i Martinell, Marie
Reignier-Tayar, Robert West
- Abstract summary: We study 165 deplatforming events targeted at 101 influencers on Reddit.
We find that deplatforming reduces online attention toward influencers.
This work contributes to the ongoing effort to map the effectiveness of content moderation interventions.
- Score: 11.958455966181807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From politicians to podcast hosts, online platforms have systematically
banned ("deplatformed") influential users for breaking platform guidelines.
Previous inquiries into the effectiveness of this intervention are inconclusive
because 1) they consider only a few deplatforming events; 2) they consider only
overt engagement traces (e.g., likes and posts) but not passive engagement
(e.g., views); 3) they do not consider all the potential places users impacted
by the deplatforming event might migrate to. We address these limitations in a
longitudinal, quasi-experimental study of 165 deplatforming events targeted at
101 influencers. We collect deplatforming events from Reddit posts and then
manually curate the data, ensuring the correctness of a large dataset of
deplatforming events. Then, we link these events to Google Trends and Wikipedia
page views, platform-agnostic measures of online attention that capture the
general public's interest in specific influencers. Through a
difference-in-differences approach, we find that deplatforming reduces online
attention toward influencers. After 12 months, we estimate the relative change
in online attention toward deplatformed influencers at -63% (95% CI [-75%, -46%])
on Google and -43% (95% CI [-57%, -24%]) on Wikipedia. Further,
as we study over a hundred deplatforming events, we can analyze in which cases
deplatforming is more or less impactful, revealing nuances about the
intervention. Notably, we find that both permanent and temporary deplatforming
reduce online attention toward influencers. Overall, this work contributes to
the ongoing effort to map the effectiveness of content moderation
interventions, driving platform governance away from speculation.
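The headline numbers above come from a difference-in-differences design applied to panel data of attention (Google Trends and Wikipedia page views) for deplatformed influencers and controls. As a rough illustration of how such an estimate is typically computed, the sketch below fits a two-way fixed-effects DiD regression with pandas and statsmodels; the column names (influencer, month, views, treated, post) and the exact specification are assumptions made for this example, not the authors' published model.

```python
# Minimal difference-in-differences sketch (NOT the paper's exact specification).
# Assumes a long-format panel: one row per influencer per month, with
#   views   - monthly Wikipedia page views (attention proxy)
#   treated - 1 if the influencer was ever deplatformed, 0 for controls
#   post    - 1 for months after the (matched) deplatforming date
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(panel: pd.DataFrame) -> None:
    panel = panel.assign(
        did=panel["treated"] * panel["post"],   # 1 only for treated units after the event
        log_views=np.log1p(panel["views"]),     # log attention, robust to zero-view months
    )
    # Two-way fixed effects: influencer dummies absorb level differences between
    # influencers, month dummies absorb common time trends (seasonality, news cycles).
    model = smf.ols("log_views ~ did + C(influencer) + C(month)", data=panel)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["influencer"]})

    beta = result.params["did"]
    lo, hi = result.conf_int().loc["did"]
    # exp(beta) - 1 converts the log-point effect into a relative change in attention.
    print(f"Relative change in attention: {np.expm1(beta):+.0%} "
          f"(95% CI [{np.expm1(lo):+.0%}, {np.expm1(hi):+.0%}])")
```

On the log scale, an interaction coefficient of roughly -1.0 corresponds to the -63% relative change reported for Google Trends, since exp(-1.0) - 1 is approximately -0.63.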
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- The Great Ban: Efficacy and Unintended Consequences of a Massive Deplatforming Operation on Reddit [0.7422344184734279]
We assess the effectiveness of The Great Ban, a massive deplatforming operation that affected nearly 2,000 communities on Reddit.
By analyzing 16M comments posted by 17K users during 14 months, we provide nuanced results on the effects, both desired and otherwise.
arXiv Detail & Related papers (2024-01-20T15:21:37Z)
- Understanding Online Migration Decisions Following the Banning of Radical Communities [0.2752817022620644]
We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, i.e., those relating to users' behavior, are associated with the decision to post on the fringe platform.
arXiv Detail & Related papers (2022-12-09T10:43:15Z)
- Competition, Alignment, and Equilibria in Digital Marketplaces [97.03797129675951]
We study a duopoly market where platform actions are bandit algorithms and the two platforms compete for user participation.
Our main finding is that competition in this market does not perfectly align market outcomes with user utility.
arXiv Detail & Related papers (2022-08-30T17:43:58Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Leveraging cross-platform data to improve automated hate speech detection [0.0]
Most existing approaches for hate speech detection focus on a single social media platform in isolation.
Here we propose a new cross-platform approach to detect hate speech which leverages multiple datasets and classification models from different platforms.
We demonstrate how this approach outperforms existing models, and achieves good performance when tested on messages from novel social media platforms.
arXiv Detail & Related papers (2021-02-09T15:52:34Z)
- The COVID-19 Infodemic: Twitter versus Facebook [5.135597127873748]
We analyze the prevalence and diffusion of links to low-credibility content on Twitter and Facebook.
A minority of accounts and pages exert a strong influence on each platform.
The overt nature of this manipulation points to the need for societal-level solutions.
arXiv Detail & Related papers (2020-12-17T02:00:43Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- ETHOS: an Online Hate Speech Detection Dataset [6.59720246184989]
We present 'ETHOS', a textual dataset with two variants: binary and multi-label, based on YouTube and Reddit comments validated using the Figure-Eight crowdsourcing platform.
Our key assumption is that, even though such a time-consuming process yields only a small amount of labelled data, it guarantees that hate speech occurrences are present in the examined material.
arXiv Detail & Related papers (2020-06-11T08:59:57Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis of 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.