Understanding Online Migration Decisions Following the Banning of
Radical Communities
- URL: http://arxiv.org/abs/2212.04765v1
- Date: Fri, 9 Dec 2022 10:43:15 GMT
- Title: Understanding Online Migration Decisions Following the Banning of
Radical Communities
- Authors: Giuseppe Russo and Manoel Horta Ribeiro and Giona Casiraghi and Luca
Verginer
- Abstract summary: We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform.
- Score: 0.2752817022620644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of radical online communities and their violent offshoots
has sparked great societal concern. However, the current practice of banning
such communities from mainstream platforms has unintended consequences: (i) the
further radicalization of their members in fringe platforms where they migrate;
and (ii) the spillover of harmful content from fringe back onto mainstream
platforms. Here, in a large observational study on two banned subreddits,
r/The_Donald and r/fatpeoplehate, we examine how factors associated with the
RECRO radicalization framework relate to users' migration decisions.
Specifically, we quantify how these factors affect users' decisions to post on
fringe platforms and, for those who do, whether they continue posting on the
mainstream platform. Our results show that individual-level factors, i.e.,
those relating to the behavior of users, are associated with the decision to
post on the fringe platform, whereas social-level factors, i.e., users'
connections with the radical community, only affect the propensity to be
coactive on both platforms.
Overall, our findings pave the way for evidence-based moderation policies, as
the decisions to migrate and remain coactive amplify unintended consequences of
community bans.
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- Stranger Danger! Cross-Community Interactions with Fringe Users Increase
the Growth of Fringe Communities on Reddit [14.060809879399386]
We study the impact of fringe-interactions on the growth of three fringe communities on Reddit.
Our results indicate that fringe-interactions attract new members to fringe communities.
Interactions using toxic language have a 5 percentage point higher chance of attracting newcomers to fringe communities than non-toxic interactions.
arXiv Detail & Related papers (2023-10-18T07:26:36Z) - Dynamics of Ideological Biases of Social Media Users [0.0]
We show that the evolution of online platform-wide opinion groups is driven by the desire to hold popular opinions.
We focus on two social media: Twitter and Parler, on which we tracked the political biases of their users.
arXiv Detail & Related papers (2023-09-27T19:39:07Z) - Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers
and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Spillover of Antisocial Behavior from Fringe Platforms: The Unintended
Consequences of Community Banning [0.2752817022620644]
We show that participating in fringe communities on Reddit increases users' toxicity and involvement with subreddits similar to the banned community.
In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
arXiv Detail & Related papers (2022-09-20T15:48:27Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
In the absence of clear regulation on Gab, users tend to engage with both types of content, showing a slight preference for questionable content, which may reflect either dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
- Do Platform Migrations Compromise Content Moderation? Evidence from
r/The_Donald and r/Incels [20.41491269475746]
We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
arXiv Detail & Related papers (2020-10-20T16:03:06Z)
- Right and left, partisanship predicts (asymmetric) vulnerability to
misinformation [71.46564239895892]
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z)
- Breaking the Communities: Characterizing community changing users using
text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for social media user classification which detects "community breakers".
arXiv Detail & Related papers (2020-08-24T23:44:51Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.