Unmasking Social Bots: How Confident Are We?
- URL: http://arxiv.org/abs/2407.13929v1
- Date: Thu, 18 Jul 2024 22:33:52 GMT
- Title: Unmasking Social Bots: How Confident Are We?
- Authors: James Giroux, Ariyarathne Gangani, Alexander C. Nwala, Cristiano Fanelli
- Abstract summary: We propose to address both bot detection and the quantification of uncertainty at the account level.
This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction.
Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social bots remain a major vector for spreading disinformation on social media and a menace to the public. Despite the progress made in developing multiple sophisticated social bot detection algorithms and tools, bot detection remains a challenging, unsolved problem that is fraught with uncertainty due to the heterogeneity of bot behaviors, training data, and detection algorithms. Detection models often disagree on whether to label the same account as bot or human-controlled. However, they do not provide any measure of uncertainty to indicate how much we should trust their results. We propose to address both bot detection and the quantification of uncertainty at the account level - a novel feature of this research. This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction, thereby enhancing decision-making and improving the reliability of bot classifications. Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
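The abstract's decision policy (intervene on high-confidence bot predictions, gather more data when predictions are uncertain) can be illustrated with a minimal sketch. The paper does not publish this code; the sketch below assumes a simple ensemble of logistic scorers whose disagreement serves as the per-account uncertainty estimate, and all names (`ensemble_predict`, `decide`, the thresholds) are hypothetical.

```python
import math
import statistics

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ensemble_predict(features, weight_sets):
    """Return (mean bot probability, spread across ensemble members).

    Each member of `weight_sets` is one detector's linear weights; the
    population standard deviation of their outputs is used as a crude
    per-account uncertainty estimate.
    """
    probs = [sigmoid(sum(w * x for w, x in zip(ws, features)))
             for ws in weight_sets]
    return statistics.mean(probs), statistics.pstdev(probs)

def decide(features, weight_sets, high_conf=0.9, max_std=0.05):
    """Map (probability, uncertainty) to an action, as described above."""
    p, u = ensemble_predict(features, weight_sets)
    if u > max_std:
        return "gather more data"        # uncertain: suggest caution
    if p >= high_conf:
        return "intervene (likely bot)"  # confident bot prediction
    if p <= 1 - high_conf:
        return "likely human"
    return "monitor"
```

The key design point is that the account-level uncertainty is consulted before the probability: a prediction the detectors disagree on is deferred for more data rather than acted upon, regardless of its mean score.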
Related papers
- Adversarial Botometer: Adversarial Analysis for Social Bot Detection [1.9280536006736573]
Social bots produce content that mimics human creativity.
Malicious social bots emerge to deceive people with their unrealistic content.
We evaluate the behavior of a text-based bot detector in a competitive environment.
arXiv Detail & Related papers (2024-05-03T11:28:21Z) - BotSSCL: Social Bot Detection with Self-Supervised Contrastive Learning [6.317191658158437]
We propose a novel framework for social bot detection with Self-Supervised Contrastive Learning (BotSSCL).
BotSSCL uses contrastive learning to distinguish between social bots and humans in the embedding space to improve linear separability.
We demonstrate BotSSCL's robustness against adversarial attempts to manipulate bot accounts to evade detection.
arXiv Detail & Related papers (2024-02-06T06:13:13Z) - What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection [48.572932773403274]
We investigate the opportunities and risks of large language models in social bot detection.
We propose a mixture-of-heterogeneous-experts framework to divide and conquer diverse user information modalities.
Experiments show that instruction tuning on 1,000 annotated examples produces specialized LLMs that outperform state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-01T06:21:19Z) - My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, most detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
arXiv Detail & Related papers (2023-10-11T03:09:48Z) - From Online Behaviours to Images: A Novel Approach to Social Bot Detection [0.3867363075280544]
A particular type of social account is known to promote disreputable, hyperpartisan, and propagandistic content.
We propose a novel approach to bot detection: a new algorithm that transforms the sequence of actions an account performs into an image.
We compare our performance with state-of-the-art results for bot detection on genuine-account/bot-account datasets well known in the literature.
arXiv Detail & Related papers (2023-04-15T11:36:50Z) - BotShape: A Novel Social Bots Detection Approach via Behavioral Patterns [4.386183132284449]
Based on a real-world data set, we construct behavioral sequences from raw event logs.
We observe differences between bots and genuine users and similar patterns among bot accounts.
We present BotShape, a novel social bot detection system that automatically captures behavioral sequences and characteristics.
arXiv Detail & Related papers (2023-03-17T19:03:06Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - Bot-Match: Social Bot Detection with Recursive Nearest Neighbors Search [9.457368716414079]
Social bots have emerged over the last decade, initially creating a nuisance and more recently being used to intimidate journalists, sway electoral events, and aggravate existing social fissures.
This social threat has spawned an arms race in which detection algorithms evolve in an attempt to keep up with increasingly sophisticated bot accounts.
Yet a gap remains: researchers, journalists, and analysts daily identify malicious bot accounts that go undetected by state-of-the-art supervised bot detection algorithms.
A similarity based algorithm could complement existing supervised and unsupervised methods and fill this gap.
arXiv Detail & Related papers (2020-07-15T11:48:24Z) - Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
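The "maximum rule" combination described in the last entry can be sketched in a few lines. This is an illustrative assumption of how such a rule operates, not the authors' implementation: each specialist is a hypothetical callable mapping account features to the probability that the account belongs to its particular bot class, and the combined bot score is the highest score any specialist assigns.

```python
def max_rule(specialist_scores):
    """Combine per-class bot probabilities by taking the maximum."""
    return max(specialist_scores)

def classify(account_features, specialists, threshold=0.5):
    # Each specialist is trained for one bot class; an account is flagged
    # as a bot if ANY specialist is sufficiently confident.
    scores = [clf(account_features) for clf in specialists]
    return "bot" if max_rule(scores) >= threshold else "human"
```

The rationale for the maximum rule is that novel bots often resemble only one known bot class; averaging the specialists' scores would dilute a single confident detection, whereas taking the maximum preserves it.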
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.