Keeping it Authentic: The Social Footprint of the Trolls Network
- URL: http://arxiv.org/abs/2409.07720v1
- Date: Thu, 12 Sep 2024 03:02:52 GMT
- Title: Keeping it Authentic: The Social Footprint of the Trolls Network
- Authors: Ori Swed, Sachith Dassanayaka, Dimitri Volchenkov
- Abstract summary: In 2016, a network of social media accounts animated by Russian operatives attempted to divert political discourse within the American public around the presidential elections.
We argue that pretending to be legitimate social actors obliges the network to adhere to social expectations.
To test the robustness of this social footprint, we train artificial intelligence to identify it and create a predictive model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In 2016, a network of social media accounts animated by Russian operatives attempted to divert political discourse within the American public around the presidential elections. This was a coordinated effort, part of a Russian-led complex information operation. Utilizing the anonymity and outreach of social media platforms, Russian operatives created an online astroturf campaign in direct contact with ordinary Americans, promoting Russian agendas and goals. The elusiveness of this type of adversarial approach rendered security agencies helpless, underscoring the unique challenges this type of intervention presents. Building on existing scholarship on the functions within influence networks on social media, we suggest a new approach to mapping these types of operations. We argue that pretending to be legitimate social actors obliges the network to adhere to social expectations, leaving a social footprint. To test the robustness of this social footprint, we train artificial intelligence to identify it and create a predictive model. We use Twitter data identified as part of the Russian influence network to train the artificial intelligence and to test the prediction. Our model attains 88% prediction accuracy on the test set. Testing our prediction on two additional models yields 90.7% and 90.5% accuracy, validating our model. The predictive and validation results suggest that a machine learning model built around social functions within the Russian influence network can be used to map its actors and functions.
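The abstract describes training a classifier on behavioral signals ("social footprint" features) to separate influence-network accounts from authentic ones. A minimal sketch of that idea, assuming hypothetical features and synthetic data; this is an illustrative stand-in, not the authors' actual feature set, model, or dataset:

```python
import math
import random

# Hypothetical per-account "social footprint" features -- assumed stand-ins
# for the behavioral signals described in the paper, NOT the real feature set:
#   [retweet_ratio, posting_burstiness, follower_following_ratio]
random.seed(0)

def make_account(is_troll):
    # Synthetic data: troll accounts are assumed to skew toward high retweet
    # ratios and bursty posting; authentic accounts look more balanced.
    if is_troll:
        return [random.uniform(0.6, 1.0), random.uniform(0.5, 1.0),
                random.uniform(0.0, 0.4)]
    return [random.uniform(0.0, 0.5), random.uniform(0.0, 0.5),
            random.uniform(0.3, 1.0)]

data = [(make_account(label), label) for label in [0, 1] * 200]
random.shuffle(data)
train, test = data[:300], data[300:]

# Minimal logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))        # predicted troll probability
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

On this cleanly separable synthetic data the sketch scores near 100%; the paper's reported 88% on real Twitter data reflects the much noisier real-world separation.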
Related papers
- Mapping the Russian Internet Troll Network on Twitter using a Predictive Model [0.0]
Russian Internet Trolls use fake personas to spread disinformation through multiple social media streams.
We create a predictive model to map the network operations.
Our model attains 88% prediction accuracy for the test set.
arXiv Detail & Related papers (2024-09-11T19:09:21Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations in both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning [31.33996447671789]
We show that it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
Our proposed policy networks train with a vast amount of synthetic graphs and generalize better than baselines on unseen real-life graphs.
This makes our approach a practical adversarial attack when deployed in a real-life setting.
arXiv Detail & Related papers (2021-10-20T16:49:26Z)
- Are socially-aware trajectory prediction models really socially-aware? [75.36961426916639]
We introduce a socially-attended attack to assess the social understanding of prediction models.
An attack is a small yet carefully crafted perturbation designed to make predictors fail.
We show that our attack can be employed to increase the social understanding of state-of-the-art models.
arXiv Detail & Related papers (2021-08-24T17:59:09Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- Automatic Detection of Influential Actors in Disinformation Networks [0.0]
This paper presents an end-to-end framework to automate detection of disinformation narratives, networks, and influential actors.
The system detects IO accounts with 96% precision, 79% recall, and 96% area under the PR curve.
Results are corroborated with independent sources of known IO accounts from U.S. Congressional reports, investigative journalism, and IO datasets provided by Twitter.
arXiv Detail & Related papers (2020-05-21T20:15:51Z)
- I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models when trained on multi-modal data from different social networks.
Our initial experimental results reveal that social network choice impacts the performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)
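Several papers in the list above report precision, recall, and PR-curve metrics (e.g., 96% precision and 79% recall for IO-account detection). A minimal sketch of how precision and recall are computed from binary labels; the toy data below is purely illustrative and not taken from any of the papers:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = flagged account)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # flagged accounts that are real positives
    recall = tp / (tp + fn) if tp + fn else 0.0     # real positives that were flagged
    return precision, recall

# Toy labels: 4 true positives in the ground truth, 4 flagged by the model.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

Sweeping a decision threshold over model scores and plotting these two quantities against each other yields the PR curve whose area the detection papers report.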
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.