Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics
- URL: http://arxiv.org/abs/1907.06130v5
- Date: Wed, 12 Jun 2024 00:43:02 GMT
- Title: Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics
- Authors: Bao Tran Truong, Xiaodan Lou, Alessandro Flammini, Filippo Menczer
- Abstract summary: We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
- Score: 43.98568073610101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media, seen by some as the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of several adversarial manipulation tactics on the quality of content. We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation. Among the explored tactics that bad actors can employ, infiltrating a community is the most likely to make low-quality content go viral. Such harm can be further compounded by inauthentic agents flooding the network with low-quality, yet appealing content, but is mitigated when bad actors focus on specific targets, such as influential or vulnerable individuals. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
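The abstract's core mechanism, information diffusion on a network where inauthentic accounts flood appealing but low-quality content, can be illustrated with a deliberately tiny sketch. Everything here (agent counts, feed size, the appeal distribution, the reshare probability) is an invented assumption for illustration, not the paper's actual model: agents reshare the most appealing message in their feeds, bad actors post zero-quality but appealing messages, and we measure the average quality circulating in feeds.

```python
import random

# Toy sketch (assumptions, not the authors' model): message = (quality, appeal).
# Agents reshare the most appealing message in their feed; inauthentic accounts
# flood appealing zero-quality messages. We track average feed quality.

random.seed(42)

N_AUTHENTIC = 90
N_BAD = 10          # inauthentic accounts flooding low-quality content
FEED_SIZE = 5
STEPS = 1000

def new_message(bad_actor):
    quality = 0.0 if bad_actor else random.random()
    # bad actors skew appeal toward 1 (appealing junk); sqrt skew is an assumption
    appeal = random.random() ** 0.5 if bad_actor else random.random()
    return (quality, appeal)

agents = [{"bad": i < N_BAD, "feed": []} for i in range(N_AUTHENTIC + N_BAD)]

for _ in range(STEPS):
    poster = random.choice(agents)
    # either reshare the most appealing message seen, or post a new one
    if poster["feed"] and random.random() < 0.5:
        msg = max(poster["feed"], key=lambda m: m[1])
    else:
        msg = new_message(poster["bad"])
    # deliver to a random sample of followers, with bounded feeds
    for follower in random.sample(agents, 10):
        follower["feed"] = (follower["feed"] + [msg])[-FEED_SIZE:]

avg_quality = sum(m[0] for a in agents for m in a["feed"]) / sum(
    len(a["feed"]) for a in agents
)
print(f"average feed quality: {avg_quality:.3f}")
```

Varying `N_BAD` or the appeal skew in a sketch like this is one way to probe, qualitatively, how flooding tactics depress the quality of content that circulates.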
Related papers
- Easy-access online social media metrics can effectively identify misinformation sharing users [41.94295877935867]
We find that higher tweet frequency is positively associated with sharing low-factuality content, while account age is negatively associated with it.
Our findings show that relying on these easy-access social network metrics could serve as a low-barrier approach for initial identification of users who are more likely to spread misinformation.
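The "low-barrier" identification idea above can be sketched as a simple scoring rule over the two reported signals. The thresholds, weights, and saturation points below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: score accounts for misinformation-sharing review using only
# easy-access metrics, following the reported associations (higher tweet
# frequency -> lower factuality; older accounts -> higher factuality).
# All constants here are invented for illustration.

def risk_score(tweets_per_day: float, account_age_days: int) -> float:
    """Crude 0..1 score combining the two reported signals."""
    freq_signal = min(tweets_per_day / 100.0, 1.0)          # saturates at 100/day
    age_signal = 1.0 - min(account_age_days / 3650.0, 1.0)  # newer = riskier
    return 0.5 * freq_signal + 0.5 * age_signal

print(risk_score(150, 30))   # hyperactive new account: higher score
print(risk_score(2, 3000))   # quiet old account: lower score
```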
arXiv Detail & Related papers (2024-08-27T16:41:13Z)
- The Illusion of Anonymity: Uncovering the Impact of User Actions on Privacy in Web3 Social Ecosystems [11.501563549824466]
We investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns.
We scrutinize the widespread phenomenon of fabricated activities, which encompasses the establishment of bogus accounts aimed at mimicking popularity.
We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges.
arXiv Detail & Related papers (2024-05-22T06:26:15Z)
- Social Media Influence Operations [0.0]
This article reviews developments at the intersection of Large Language Models (LLMs) and influence operations.
LLMs are able to generate targeted and persuasive text that is, for the most part, indistinguishable from human-written content.
Mitigation measures for the near future are highlighted.
arXiv Detail & Related papers (2023-09-07T12:18:07Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we propose a data collection schema and curate a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
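The keyword-twisting tactic this entry describes can be shown with a minimal sketch. The substitution table, blocklist, and normalization step below are invented for illustration and are far simpler than the paper's multilingual tools:

```python
# Hedged sketch: simulate simple word camouflage via leetspeak substitution,
# and detect it by normalizing text before matching against a blocklist.
# Mappings and blocklist are illustrative assumptions.

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}
UNLEET = {v: k for k, v in LEET.items()}

def camouflage(word: str) -> str:
    """Evade a naive keyword filter by substituting look-alike characters."""
    return "".join(LEET.get(c, c) for c in word.lower())

def normalize(text: str) -> str:
    """Undo the substitutions so a blocklist match works again."""
    return "".join(UNLEET.get(c, c) for c in text.lower())

BLOCKLIST = {"scam"}

evasive = camouflage("scam")             # -> "5c4m"
print(evasive in BLOCKLIST)              # False: naive filter misses it
print(normalize(evasive) in BLOCKLIST)   # True: normalized match catches it
```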
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- The Impact of Disinformation on a Controversial Debate on Social Media [1.299941371793082]
We study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter.
By characterising Twitter users with an Untrustworthiness score, we see that such bad information consumption habits are not equally distributed across users.
arXiv Detail & Related papers (2021-06-30T10:29:07Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, showing a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
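The weak-supervision idea in the last entry, weighting noisy weak labels by how well they agree with a small clean set, can be illustrated with a deliberately tiny sketch. The centroid "model", the data, and the weight values are invented here; the paper trains deep networks in a meta-learning framework:

```python
# Hedged sketch: downweight weakly labelled examples whose labels disagree
# with a model fit on the small clean set, then train on the reweighted mix.
# All data and constants are illustrative assumptions.

def train_centroid(examples):
    """Toy 1-feature 'model': class centroids fit on trusted data."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return sum(pos) / len(pos), sum(neg) / len(neg)

def predict(model, x):
    c_pos, c_neg = model
    return 1 if abs(x - c_pos) < abs(x - c_neg) else 0

clean = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]   # small set of trusted labels
weak = [(0.85, 1), (0.15, 0), (0.7, 0), (0.3, 1)]  # noisy social-signal labels

model = train_centroid(clean)
# instance weight: 1.0 when the clean-data model agrees with the weak label,
# 0.2 otherwise (downweighting suspected label noise)
weights = [1.0 if predict(model, x) == y else 0.2 for x, y in weak]
print(weights)  # the two mislabelled weak instances get downweighted
```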
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.