Is radicalization reinforced by social media censorship?
- URL: http://arxiv.org/abs/2103.12842v1
- Date: Tue, 23 Mar 2021 21:07:34 GMT
- Title: Is radicalization reinforced by social media censorship?
- Authors: Justin E. Lane, Kevin McCaffree, F. LeRon Shults
- Abstract summary: Radicalized beliefs, such as those tied to QAnon, Russiagate, and other political conspiracy theories, can lead some individuals and groups to engage in violent behavior.
This article presents an agent-based model of a social media network that enables investigation of the effects of censorship on the amount of dissenting information to which agents are exposed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Radicalized beliefs, such as those tied to QAnon, Russiagate, and other
political conspiracy theories, can lead some individuals and groups to engage
in violent behavior, as evidenced in recent months. Understanding the
mechanisms by which such beliefs are accepted, spread, and intensified is
critical for any attempt to mitigate radicalization and avoid increased
political polarization. This article presents an agent-based model of a social
media network that enables investigation of the effects of censorship on the
amount of dissenting information to which agents become exposed and the
certainty of their radicalized views. The model explores two forms of
censorship: 1) decentralized censorship, in which individuals can choose to
break an online social network tie (unfriend or unfollow) with another
individual who transmits conflicting beliefs, and 2) centralized censorship, in
which a single authority can ban an individual from the social media network
for spreading a certain type of belief. This model suggests that both forms of
censorship increase certainty in radicalized views by decreasing the amount of
dissent to which an agent is exposed, but centralized "banning" of individuals
has the strongest effect on radicalization.
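To make the two censorship mechanisms concrete, the sketch below implements a toy agent-based model of this kind in Python. It is a minimal illustration under assumed settings, not the authors' implementation: the network generator, belief-update rule, unfriend probability, and ban threshold are all assumptions, and the sketch is not calibrated to reproduce the paper's results.

```python
# Minimal sketch of an agent-based censorship model in the spirit of the paper.
# All parameters and update rules are assumptions, not the authors' implementation.
import random
import networkx as nx

N_AGENTS = 200        # network size (assumption)
STEPS = 100           # number of simulation steps (assumption)
UNFRIEND_PROB = 0.3   # decentralized censorship: chance of breaking a tie over dissent (assumption)
BAN_THRESHOLD = 0.9   # centralized censorship: certainty above which an agent is banned (assumption)


def run(mode: str, seed: int = 0) -> float:
    """Run one simulation; mode is 'none', 'decentralized', or 'centralized'.

    Returns the mean certainty in the radicalized belief among remaining agents.
    """
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(N_AGENTS, 0.05, seed=seed)
    # Each agent's certainty in the radicalized belief, in [0, 1].
    certainty = {n: rng.random() for n in g.nodes}

    for _ in range(STEPS):
        for agent in list(g.nodes):
            neighbors = list(g.neighbors(agent))
            if not neighbors:
                continue
            other = rng.choice(neighbors)
            if abs(certainty[agent] - certainty[other]) < 0.5:
                # Agreement reinforces the agent's view.
                certainty[agent] = min(1.0, certainty[agent] + 0.01)
            else:
                # Exposure to dissent weakens it ...
                certainty[agent] = max(0.0, certainty[agent] - 0.01)
                # ... unless decentralized censorship removes the tie (unfriend/unfollow).
                if mode == "decentralized" and rng.random() < UNFRIEND_PROB:
                    g.remove_edge(agent, other)

        if mode == "centralized":
            # A single authority bans agents spreading the targeted belief.
            banned = [n for n in g.nodes if certainty[n] > BAN_THRESHOLD]
            g.remove_nodes_from(banned)
            for n in banned:
                del certainty[n]

    return sum(certainty.values()) / len(certainty) if certainty else float("nan")


if __name__ == "__main__":
    for mode in ("none", "decentralized", "centralized"):
        print(f"{mode:13s} mean certainty: {run(mode):.3f}")
```

In this toy setup, the quantity of interest is how each censorship mode changes the dissent an agent encounters and, through that, the mean certainty; the specific thresholds and increments above are placeholders chosen only to make the mechanisms explicit.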
Related papers
- CensorLab: A Testbed for Censorship Experimentation [15.411134921415567]
We design and implement CensorLab, a generic platform for emulating Internet censorship scenarios.
CensorLab aims to support all censorship mechanisms previously or currently deployed by real-world censors.
It provides an easy-to-use platform for researchers and practitioners enabling them to perform extensive experimentation.
arXiv Detail & Related papers (2024-12-20T21:17:24Z)
- Toxic behavior silences online political conversations [0.0]
We investigate the hypothesis that individuals may refrain from expressing minority opinions publicly due to being exposed to toxic behavior.
Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence.
Our findings offer insights into the intricacies of online political deliberation and emphasize the importance of considering self-censorship dynamics.
arXiv Detail & Related papers (2024-12-07T20:39:20Z)
- Pathfinder: Exploring Path Diversity for Assessing Internet Censorship Inconsistency [8.615061541238589]
We investigate Internet censorship from a different perspective by scrutinizing the diverse censorship deployment inside a country.
We reveal that the diversity of Internet censorship caused by different routing paths inside a country is prevalent.
We identify that different hosting platforms also result in inconsistent censorship activities due to different peering relationships with the ISPs in a country.
arXiv Detail & Related papers (2024-07-05T01:48:31Z)
- Understanding Divergent Framing of the Supreme Court Controversies: Social Media vs. News Outlets [56.67097829383139]
We focus on the nuanced distinctions in framing of social media and traditional media outlets concerning a series of U.S. Supreme Court rulings.
We observe significant polarization in the news media's treatment of affirmative action and abortion rights, whereas the topic of student loans tends to exhibit a greater degree of consensus.
arXiv Detail & Related papers (2023-09-18T06:40:21Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- How We Express Ourselves Freely: Censorship, Self-censorship, and Anti-censorship on a Chinese Social Media [4.408128846525362]
We identify the metrics of censorship and self-censorship, find the influence factors, and construct a mediation model to measure their relationship.
Based on these findings, we discuss implications for democratic social media design and future censorship research.
arXiv Detail & Related papers (2022-11-24T18:28:16Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Reaching the bubble may not be enough: news media role in online political polarization [58.720142291102135]
A way of reducing polarization would be by distributing cross-partisan news among individuals with distinct political orientations.
This study investigates whether this holds in the context of nationwide elections in Brazil and Canada.
arXiv Detail & Related papers (2021-09-18T11:34:04Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency for users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- #ISIS vs #ActionCountersTerrorism: A Computational Analysis of Extremist and Counter-extremist Twitter Narratives [2.685668802278155]
This study will apply computational techniques to analyse the narratives of various pro-extremist and counter-extremist Twitter accounts.
Our findings show that pro-extremist accounts often use different strategies to disseminate content when compared to counter-extremist accounts across different types of organisations.
arXiv Detail & Related papers (2020-08-26T20:46:45Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.