The Impact of Disinformation on a Controversial Debate on Social Media
- URL: http://arxiv.org/abs/2106.15968v1
- Date: Wed, 30 Jun 2021 10:29:07 GMT
- Title: The Impact of Disinformation on a Controversial Debate on Social Media
- Authors: Salvatore Vilella, Alfonso Semeraro, Daniela Paolotti, Giancarlo Ruffo
- Abstract summary: We study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter.
By characterising Twitter users with an Untrustworthiness score, we are able to see that such bad information consumption habits are not equally distributed across users.
- Score: 1.299941371793082
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we study how pervasive the presence of disinformation is in the
Italian debate around immigration on Twitter, and the role of automated accounts
in the diffusion of such content. By characterising Twitter users with an
Untrustworthiness score, which tells us how frequently they engage with
disinformation content, we are able to see that such bad information
consumption habits are not equally distributed across users; adopting a
network analysis approach, we can identify communities characterised by a very
high presence of users who frequently share content from unreliable news
sources. Within this context, social bots tend to inject more malicious
content into the network, and this content often remains confined to a limited
number of clusters; at the same time, bots also target reliable content in
order to diversify their reach. The evidence we gather suggests that, at least
in this particular case study, there is a strong interplay between social bots
and users engaging with unreliable content, which influences the diffusion of
the latter across the network.
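As a minimal illustration of the kind of metric described above, the sketch below computes a user-level untrustworthiness score as the fraction of shared links pointing to unreliable domains; the blacklist, the URL handling, and the exact definition are assumptions for illustration, not the paper's actual methodology.

```python
# Illustrative only: score a user by the fraction of their shared links that
# point to domains flagged as unreliable. The paper's exact definition of the
# Untrustworthiness score and its list of unreliable sources may differ.
from urllib.parse import urlparse

# Assumed blacklist of unreliable news domains (hypothetical examples).
UNRELIABLE_DOMAINS = {"fakenews.example", "clickbait.example"}

def untrustworthiness(shared_urls):
    """Fraction of a user's shared URLs whose domain is in the unreliable set."""
    if not shared_urls:
        return 0.0
    flagged = 0
    for url in shared_urls:
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in UNRELIABLE_DOMAINS:
            flagged += 1
    return flagged / len(shared_urls)

# Example: one user who shared three links, one from a flagged source.
urls = [
    "https://www.fakenews.example/story",
    "https://reputable.example/news/a",
    "https://reputable.example/news/b",
]
print(round(untrustworthiness(urls), 3))  # 0.333
```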
Related papers
- Easy-access online social media metrics can effectively identify misinformation sharing users [41.94295877935867]
We find that higher tweet frequency is positively associated with low factuality in shared content, while account age is negatively associated with it.
Our findings show that relying on these easy-access social network metrics could serve as a low-barrier approach for initial identification of users who are more likely to spread misinformation.
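As a hedged illustration of flagging likely misinformation sharers from easy-access account metrics, the sketch below fits a simple logistic model on synthetic tweet-frequency and account-age features; the data, features, and model are assumptions, not the cited paper's pipeline.

```python
# Illustrative sketch only: a simple classifier on easy-access account metrics.
# Features and labels are synthetic; the cited paper's data and model may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
tweet_freq = rng.exponential(scale=20, size=n)   # tweets per day (synthetic)
account_age = rng.uniform(0.1, 12.0, size=n)     # account age in years (synthetic)

# Synthetic labels consistent with the reported associations:
# higher tweet frequency -> more likely low-factuality sharing,
# older accounts -> less likely.
logit = 0.05 * tweet_freq - 0.3 * account_age - 0.5
labels = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([tweet_freq, account_age])
clf = LogisticRegression().fit(X, labels)
print(dict(zip(["tweet_freq", "account_age"], clf.coef_[0])))
```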
arXiv Detail & Related papers (2024-08-27T16:41:13Z) - Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
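A minimal sketch of how group-wise interactions among news items could be encoded as a hypergraph incidence matrix is given below; the grouping criteria are assumptions, and the cited paper's dual-level attention mechanism is not reproduced.

```python
# Illustrative only: build a news-item hypergraph incidence matrix, where each
# hyperedge groups news items that interact as a group (e.g. shared by the same
# user). The cited paper's construction and attention mechanism are not shown.
import numpy as np

news_ids = ["n0", "n1", "n2", "n3"]
# Assumed hyperedges: sets of news items linked by a shared property.
hyperedges = [
    {"n0", "n1", "n2"},   # e.g. shared by the same user
    {"n2", "n3"},         # e.g. posted in the same thread
]

index = {nid: i for i, nid in enumerate(news_ids)}
H = np.zeros((len(news_ids), len(hyperedges)), dtype=int)
for j, edge in enumerate(hyperedges):
    for nid in edge:
        H[index[nid], j] = 1

print(H)  # rows = news items, columns = hyperedges
```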
arXiv Detail & Related papers (2022-12-24T00:19:32Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer the question of who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency of users to engage with both types of content, with a slight preference for the questionable ones, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
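One generic way to alleviate selection bias of this kind is inverse propensity weighting; the sketch below demonstrates that standard technique on synthetic data and is not a reproduction of the cited paper's principled approach.

```python
# Generic inverse-propensity-weighting sketch with synthetic data; the cited
# paper's actual approach may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
user_attr = rng.normal(size=n)                  # e.g. an activity-level attribute
# Users with higher attribute values are more likely to be observed sharing.
p_share = 1 / (1 + np.exp(-(user_attr - 0.5)))
shared = rng.random(n) < p_share                # selection indicator
outcome = 2.0 * user_attr + rng.normal(size=n)  # some downstream quantity

# Naive estimate uses only the selected (sharing) users and is biased.
naive = outcome[shared].mean()

# Estimate the propensity of being selected, then reweight the selected sample.
prop = LogisticRegression().fit(
    user_attr.reshape(-1, 1), shared
).predict_proba(user_attr.reshape(-1, 1))[:, 1]
weights = 1.0 / prop[shared]
ipw = np.average(outcome[shared], weights=weights)

print(f"naive={naive:.2f}  ipw={ipw:.2f}  true mean={outcome.mean():.2f}")
```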
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Network Inference from a Mixture of Diffusion Models for Fake News Mitigation [12.229596498611837]
The dissemination of fake news intended to deceive people, influence public opinion and manipulate social outcomes has become a pressing problem on social media.
We focus on understanding and leveraging diffusion dynamics of false and legitimate contents in order to facilitate network interventions for fake news mitigation.
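To illustrate the kind of diffusion dynamics such work builds on, the sketch below simulates a standard independent-cascade spread on a random graph; the cited paper's mixture-of-diffusion-models inference is not implemented here, and the graph and activation probability are assumptions.

```python
# Illustrative only: simulate a standard independent-cascade diffusion on a
# random graph. The cited paper infers a network from a mixture of diffusion
# models, which is not implemented here.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
activation_prob = 0.2          # assumed uniform edge activation probability

def independent_cascade(graph, seeds, p):
    """Return the set of nodes eventually activated from the seed set."""
    active = set(seeds)
    frontier = set(seeds)
    while frontier:
        next_frontier = set()
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and random.random() < p:
                    next_frontier.add(v)
        active |= next_frontier
        frontier = next_frontier
    return active

reached = independent_cascade(G, seeds={0}, p=activation_prob)
print(f"{len(reached)} of {G.number_of_nodes()} nodes reached")
```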
arXiv Detail & Related papers (2020-08-08T05:59:25Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
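One common way to operationalize echo chambers, used here purely as an illustration, is to compare each user's inferred leaning with the average leaning of their interaction neighbours; the network and leanings below are synthetic stand-ins, and the cited paper's exact operational definition may differ.

```python
# Illustrative sketch: compare each user's leaning with the average leaning of
# their neighbours; a high correlation is one common echo-chamber indicator.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.karate_club_graph()                      # stand-in interaction network
leaning = {u: rng.uniform(-1, 1) for u in G}    # assumed leanings in [-1, 1]

user_lean, neigh_lean = [], []
for u in G:
    neighbours = list(G.neighbors(u))
    if neighbours:
        user_lean.append(leaning[u])
        neigh_lean.append(np.mean([leaning[v] for v in neighbours]))

corr = np.corrcoef(user_lean, neigh_lean)[0, 1]
print(f"user-vs-neighbourhood leaning correlation: {corr:.2f}")
```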
arXiv Detail & Related papers (2020-04-20T20:00:27Z) - Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.