Unsupervised detection of coordinated fake-follower campaigns on social
media
- URL: http://arxiv.org/abs/2310.20407v1
- Date: Tue, 31 Oct 2023 12:30:29 GMT
- Title: Unsupervised detection of coordinated fake-follower campaigns on social
media
- Authors: Yasser Zouzou and Onur Varol
- Abstract summary: We present a novel unsupervised detection method designed to target a specific category of malicious accounts.
Our framework identifies anomalous following patterns among all the followers of a social media account.
We find that these detected groups of anomalous followers exhibit consistent behavior across multiple accounts.
- Score: 1.3035246321276739
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated social media accounts, known as bots, are increasingly recognized
as key tools for manipulative online activities. These activities can stem from
coordination among several accounts and these automated campaigns can
manipulate social network structure by following other accounts, amplifying
their content, and posting messages to spam online discourse. In this study, we
present a novel unsupervised detection method designed to target a specific
category of malicious accounts created to manipulate user metrics such as
online popularity. Our framework identifies anomalous following patterns among
all the followers of a social media account. Through the analysis of a large
number of accounts on the Twitter platform (rebranded as X after its
acquisition by Elon Musk), we demonstrate that irregular following patterns are
prevalent and are indicative of automated fake accounts. Notably, we find that
these detected groups of anomalous followers exhibit consistent behavior across
multiple accounts. This observation, combined with the computational efficiency
of our proposed approach, makes it a valuable tool for investigating
large-scale coordinated manipulation campaigns on social media platforms.
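The abstract does not spell out the detection algorithm, so the following is only a loose illustration of the general idea of flagging groups of followers with anomalous, near-identical patterns — here, bursts of followers whose accounts were all created within a short window, a common fingerprint of batch-registered fake accounts. The function name, parameters, and thresholds are hypothetical, not the authors' method:

```python
from datetime import datetime, timedelta

def flag_creation_bursts(creation_times, window, min_burst):
    """Return the creation timestamps of followers that fall inside a dense
    burst: any sliding window of length `window` containing at least
    `min_burst` account creations. Dense bursts are a crude proxy for
    batch-registered fake followers."""
    times = sorted(creation_times)
    flagged = set()
    left = 0
    for right in range(len(times)):
        # Shrink the window from the left until it spans at most `window`.
        while times[right] - times[left] > window:
            left += 1
        if right - left + 1 >= min_burst:
            flagged.update(range(left, right + 1))
    return [times[i] for i in sorted(flagged)]
```

On a real follower list one would tune `window` and `min_burst` to the account's size and combine this signal with others (e.g. follow order), but the sliding-window pass already runs in near-linear time, consistent with the computational efficiency the abstract emphasizes.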
Related papers
- Unraveling the Web of Disinformation: Exploring the Larger Context of State-Sponsored Influence Campaigns on Twitter [16.64763746842362]
We study 19 state-sponsored disinformation campaigns that took place on Twitter, originating from various countries.
We build a machine learning-based classifier that can correctly identify up to 94% of accounts from unseen campaigns.
We also run our system in the wild and find more accounts that could potentially belong to state-backed operations.
arXiv Detail & Related papers (2024-07-25T15:03:33Z)
- User Identity Linkage in Social Media Using Linguistic and Social Interaction Features [11.781485566149994]
User identity linkage aims to reveal social media accounts likely to belong to the same natural person.
This work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity.
The model's efficacy is demonstrated on two cases involving abusive and terrorism-related Twitter content.
arXiv Detail & Related papers (2023-08-22T15:10:38Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Detecting fake accounts through Generative Adversarial Network in online social media [0.0]
This paper proposes a novel method using user similarity measures and the Generative Adversarial Network (GAN) algorithm to identify fake user accounts in the Twitter dataset.
Despite the problem's complexity, the method achieves an AUC rate of 80% in classifying and detecting fake accounts.
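The summary mentions user similarity measures without detail; cosine similarity over simple profile feature vectors is one common choice for such measures. A minimal sketch — the feature vectors here are invented for illustration, not the paper's feature set:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two user feature vectors
    (e.g. tweet count, follower count, friend count).
    Returns 0.0 when either vector is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```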
arXiv Detail & Related papers (2022-10-25T10:20:27Z)
- Manipulating Twitter Through Deletions [64.33261764633504]
Research into influence campaigns on Twitter has mostly relied on identifying malicious activities from tweets obtained via public APIs.
Here, we provide the first exhaustive, large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts.
We find that a small fraction of accounts delete a large number of tweets daily, enabling two abuse strategies.
First, limits on tweet volume are circumvented, allowing certain accounts to flood the network with over 26 thousand daily tweets.
Second, coordinated networks of accounts engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms.
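The first abuse pattern — evading tweet-volume limits by deleting and reposting — can be approximated with simple per-day counting, since deletions do not reduce the number of tweets an account actually posted that day. The event schema and default limit (Twitter's documented 2,400-tweets-per-day cap at the time) are assumptions for illustration:

```python
from collections import Counter

def flag_flooders(events, daily_limit=2400):
    """Flag accounts that post more than `daily_limit` tweets in one day.
    Accounts that delete tweets to stay under the visible limit still show
    up here, because only "post" events are counted.
    events: iterable of (account_id, day, kind), kind in {"post", "delete"}.
    """
    posts = Counter()
    for account, day, kind in events:
        if kind == "post":
            posts[(account, day)] += 1
    return sorted({acct for (acct, day), n in posts.items() if n > daily_limit})
```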
arXiv Detail & Related papers (2022-03-25T20:07:08Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
Supervised machine learning (ML) framework is adopted using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
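SHAP efficiently approximates Shapley values, which attribute a model's prediction to its input features. For a handful of features the exact quantity SHAP targets can be computed directly by enumerating feature subsets. This toy exact version is for intuition only and is not the SHAP library's API:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values over a small feature set.
    features: list of feature names.
    value_fn(subset) -> model output when only `subset` is 'present'.
    Returns {feature: its average marginal contribution}."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi
```

For an additive model the Shapley value of each feature recovers exactly that feature's contribution, which is a handy sanity check; the exponential subset enumeration is why practical tools like SHAP rely on approximations.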
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Relational Graph Neural Networks for Fraud Detection in a Super-App environment [53.561797148529664]
We propose a framework of relational graph convolutional networks methods for fraudulent behaviour prevention in the financial services of a Super-App.
We use an interpretability algorithm for graph neural networks to determine the most important relations to the classification task of the users.
Our results show that there is an added value when considering models that take advantage of the alternative data of the Super-App and the interactions found in their high connectivity.
arXiv Detail & Related papers (2021-07-29T00:02:06Z)
- Misleading Repurposing on Twitter [3.0254442724635173]
We present the first in-depth and large-scale study of misleading repurposing.
A malicious user changes the identity of their social media account via, among other things, changes to the profile attributes in order to use the account for a new purpose while retaining their followers.
We propose a definition for the behavior and a methodology that uses supervised learning on data mined from the Internet Archive's Twitter Stream Grab to flag repurposed accounts.
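A crude stand-in for the supervised approach above: flag accounts whose display name changes almost entirely between archive snapshots while the account itself persists. The snapshot format, threshold, and use of string similarity are hypothetical simplifications, not the paper's methodology:

```python
from difflib import SequenceMatcher

def flag_repurposed(snapshots, threshold=0.3):
    """Flag accounts whose profile name changed almost completely between
    consecutive snapshots -- a rough heuristic for misleading repurposing.
    snapshots: dict {account_id: [name_at_t0, name_at_t1, ...]}."""
    flagged = []
    for account, names in snapshots.items():
        for old, new in zip(names, names[1:]):
            similarity = SequenceMatcher(None, old.lower(), new.lower()).ratio()
            if similarity < threshold:
                flagged.append(account)
                break  # one drastic rename is enough to flag the account
    return flagged
```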
arXiv Detail & Related papers (2020-10-20T20:19:01Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
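The maximum rule mentioned above is simple to state: each account receives the highest bot score that any specialist classifier assigns it, so an account only needs to look suspicious to one specialist to be flagged. A minimal sketch with invented classifier names and scores:

```python
def max_rule_combine(scores_by_classifier):
    """Combine per-specialist bot probabilities via the maximum rule.
    scores_by_classifier: dict {classifier_name: {account_id: prob}}.
    Returns {account_id: highest probability across all specialists}."""
    combined = {}
    for probs in scores_by_classifier.values():
        for account, p in probs.items():
            combined[account] = max(combined.get(account, 0.0), p)
    return combined
```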
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
- Learning with Weak Supervision for Email Intent Detection [56.71599262462638]
We propose to leverage user actions as a source of weak supervision to detect intents in emails.
We develop an end-to-end robust deep neural network model for email intent identification.
arXiv Detail & Related papers (2020-05-26T23:41:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.