RoBCtrl: Attacking GNN-Based Social Bot Detectors via Reinforced Manipulation of Bots Control Interaction
- URL: http://arxiv.org/abs/2510.16035v1
- Date: Thu, 16 Oct 2025 02:41:49 GMT
- Title: RoBCtrl: Attacking GNN-Based Social Bot Detectors via Reinforced Manipulation of Bots Control Interaction
- Authors: Yingguang Yang, Xianghua Zeng, Qi Wu, Hao Peng, Yutong Xia, Hao Liu, Bin Chong, Philip S. Yu
- Abstract summary: This paper proposes the first adversarial multi-agent reinforcement learning framework for social bot control attacks (RoBCtrl). Specifically, we use a diffusion model to generate high-fidelity bot accounts by reconstructing existing account data with minor modifications. We then employ a Multi-Agent Reinforcement Learning (MARL) method to simulate the bots' adversarial behavior.
- Score: 51.46634975923564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has garnered considerable attention from researchers, leading to the development of numerous detection technologies. However, the vulnerability and robustness of these detection methods are still underexplored. Existing adversarial attack methods cannot be directly applied to Graph Neural Network (GNN)-based detectors due to limited control over social agents, the black-box nature of bot detectors, and the heterogeneity of bots. To address these challenges, this paper proposes the first adversarial multi-agent reinforcement learning framework for social bot control attacks (RoBCtrl) targeting GNN-based social bot detectors. Specifically, we use a diffusion model to generate high-fidelity bot accounts by reconstructing existing account data with minor modifications, thereby evading detection on social platforms. To the best of our knowledge, this is the first application of diffusion models to mimic the behavior of evolving social bots effectively. We then employ a Multi-Agent Reinforcement Learning (MARL) method to simulate the bots' adversarial behavior. We categorize social accounts based on their influence and budget; different agents then control bot accounts across the various categories, optimizing the attachment strategy through reinforcement learning. Additionally, a hierarchical state abstraction based on structural entropy is designed to accelerate reinforcement learning. Extensive experiments on social bot detection datasets demonstrate that our framework can effectively undermine the performance of GNN-based detectors.
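The attachment strategy described in the abstract can be illustrated with a toy sketch. This is not the paper's actual method: the "detector" below is a made-up homophily heuristic standing in for the black-box GNN detector, and a greedy loop stands in for the learned RL policy; all node IDs and the budget are illustrative.

```python
# Toy graph as adjacency sets; nodes 0-4 are humans, 5-6 are bots.
HUMANS = [0, 1, 2, 3, 4]
BOTS = [5, 6]

def detector_score(adj, bot):
    """Mock black-box detector: a bot surrounded by other bots looks
    suspicious; linking to humans lowers the score. A stand-in for a
    GNN detector's bot-probability output."""
    neigh = adj[bot]
    if not neigh:
        return 1.0
    return sum(1 for n in neigh if n in BOTS) / len(neigh)

def greedy_attach(adj, bot, budget):
    """One agent's attachment policy: within the bot's edge budget,
    greedily add the edge that most reduces the detector score."""
    for _ in range(budget):
        best, best_score = None, detector_score(adj, bot)
        for target in HUMANS:
            if target in adj[bot]:
                continue
            adj[bot].add(target)          # tentatively attach
            s = detector_score(adj, bot)
            adj[bot].remove(target)       # undo the trial edge
            if s < best_score:
                best, best_score = target, s
        if best is None:
            break
        adj[bot].add(best)
    return adj

adj = {n: set() for n in HUMANS + BOTS}
adj[5] = {6}  # bot 5 initially connected only to bot 6
before = detector_score(adj, 5)
greedy_attach(adj, 5, budget=2)
after = detector_score(adj, 5)
```

After two attachments to humans, the mock detector's score for bot 5 drops from 1.0 to 1/3, which is the kind of degradation the RL agents are trained to maximize.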
Related papers
- FATe of Bots: Ethical Considerations of Social Bot Detection [1.8470340645800405]
We examine the ethical implications for social bot detection systems through three pillars: training datasets, algorithm development, and the use of bot agents. We aim to inspire more responsible and equitable approaches towards improving the social media bot detection landscape.
arXiv Detail & Related papers (2026-02-05T01:53:17Z)
- SeBot: Structural Entropy Guided Multi-View Contrastive Learning for Social Bot Detection [34.68635583099056]
We propose SeBot, a novel multi-view, graph-based, contrastive-learning-enabled social bot detector.
In particular, we use structural entropy as an uncertainty metric to optimize the entire graph's structure.
We also design an encoder to enable message passing beyond the homophily assumption.
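Structural entropy, used here as an uncertainty metric over graph structure, has a simple one-dimensional form. The sketch below follows the standard degree-based (Li-and-Pan-style) formulation, not SeBot's full hierarchical version; the example graph is illustrative.

```python
import math

def structural_entropy_1d(adj):
    """One-dimensional structural entropy of an undirected graph:
    H = -sum_i (d_i / 2m) * log2(d_i / 2m), where d_i is node i's
    degree and m is the number of edges."""
    degrees = {u: len(vs) for u, vs in adj.items()}
    two_m = sum(degrees.values())  # sum of degrees = 2m
    return -sum((d / two_m) * math.log2(d / two_m)
                for d in degrees.values() if d > 0)

# A 4-cycle: every node has degree 2, so each term is (2/8)*log2(2/8),
# giving H = 2 bits.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
entropy = structural_entropy_1d(cycle)
```

Lower structural entropy corresponds to a more "ordered" graph; hierarchical variants minimize it over encoding trees rather than a flat degree distribution.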
arXiv Detail & Related papers (2024-05-18T08:16:11Z)
- BotSSCL: Social Bot Detection with Self-Supervised Contrastive Learning [6.317191658158437]
We propose a novel framework for social bot detection with Self-Supervised Contrastive Learning (BotSSCL).
BotSSCL uses contrastive learning to distinguish between social bots and humans in the embedding space to improve linear separability.
We demonstrate BotSSCL's robustness against adversarial attempts to manipulate bot accounts to evade detection.
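A minimal, stdlib-only sketch of the contrastive objective behind this idea: an NT-Xent-style loss for one positive pair, which pulls two augmented views of the same account together while pushing other accounts away, tending to improve linear separability. The embeddings, negatives, and temperature below are made up, not BotSSCL's actual values.

```python
import math

def ntxent_pair_loss(z_i, z_j, negatives, tau=0.5):
    """NT-Xent loss for one anchor/positive pair against a set of
    negatives, using cosine similarity scaled by temperature tau."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    pos = math.exp(cos(z_i, z_j) / tau)
    denom = pos + sum(math.exp(cos(z_i, n) / tau) for n in negatives)
    return -math.log(pos / denom)

anchor = [1.0, 0.0]            # one view of an account
view = [0.9, 0.1]              # its augmented second view
negs = [[-1.0, 0.0], [0.0, 1.0]]  # other accounts in the batch
loss = ntxent_pair_loss(anchor, view, negs)
```

Because the positive pair is nearly aligned and the negatives are far, the loss is small; training would minimize it over many such pairs.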
arXiv Detail & Related papers (2024-02-06T06:13:13Z)
- What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection [48.572932773403274]
We investigate the opportunities and risks of large language models in social bot detection.
We propose a mixture-of-heterogeneous-experts framework to divide and conquer diverse user information modalities.
Experiments show that instruction tuning on 1,000 annotated examples produces specialized LLMs that outperform state-of-the-art baselines.
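The divide-and-conquer idea can be sketched as a gated mixture over modality-specific experts. In the paper the experts are LLM-based and process different user-information modalities; here the expert scores, gate logits, and modalities are purely illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_of_experts(expert_scores, gate_logits):
    """Gated mixture over modality-specific experts (e.g. metadata,
    text, graph structure): each expert emits a bot score, and a
    gate weights them per account."""
    gates = softmax(gate_logits)
    return sum(g * s for g, s in zip(gates, expert_scores))

# Three hypothetical experts and a gate favoring the first:
score = mixture_of_experts([0.9, 0.2, 0.6], [2.0, 0.0, 1.0])
```

The combined score always lies between the lowest and highest expert score, since the gate weights are a convex combination.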
arXiv Detail & Related papers (2024-02-01T06:21:19Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- RoSGAS: Adaptive Social Bot Detection with Reinforced Self-Supervised GNN Architecture Search [12.567692688720353]
Social bots are automated accounts on social networks that attempt to behave like humans.
In this paper, we propose RoSGAS, a novel Reinforced and Self-supervised GNN Architecture Search framework.
We exploit heterogeneous information network to present the user connectivity by leveraging account metadata, relationships, behavioral features and content features.
Experiments on 5 Twitter datasets show that RoSGAS outperforms the state-of-the-art approaches in terms of accuracy, training efficiency and stability.
arXiv Detail & Related papers (2022-06-14T11:12:02Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
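SHAP's additive attributions can be illustrated by computing exact Shapley values for a tiny hypothetical "bot score" model. In practice one would use the SHAP library's tree explainer on the trained XGBoost model; the linear model, features, and baseline below are made up for the sketch.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, features, baseline):
    """Exact Shapley attributions for a small model: for each feature,
    average its marginal contribution over all subsets of the others.
    Absent features are replaced by baseline values (a common
    SHAP-style convention)."""
    n = len(features)
    names = list(range(n))

    def value(subset):
        x = [features[i] if i in subset else baseline[i] for i in names]
        return predict(x)

    phi = []
    for i in names:
        others = [j for j in names if j != i]
        total = 0.0
        for k in range(n):  # subset sizes 0 .. n-1
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Toy "bot score": linear in follower count, tweet rate, account age.
predict = lambda x: 0.5 * x[0] + 2.0 * x[1] - 0.1 * x[2]
phi = shapley_values(predict, [4.0, 1.0, 10.0], [0.0, 0.0, 0.0])
```

For a linear model each attribution reduces to weight times feature offset, and the attributions sum to the gap between the prediction and the baseline prediction, which is the additivity property SHAP plots rely on.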
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning [31.33996447671789]
We show that it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
Our proposed policy networks are trained on a large number of synthetic graphs and generalize better than baselines on unseen real-life graphs.
This makes our approach a practical adversarial attack when deployed in a real-life setting.
arXiv Detail & Related papers (2021-10-20T16:49:26Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
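The maximum rule itself is simple: each specialist scores the account, and the ensemble takes the highest score, so an account is flagged if any specialist is sufficiently confident. The specialists, probabilities, and threshold below are illustrative, not from the paper.

```python
def max_rule_predict(specialist_probs, threshold=0.5):
    """Maximum-rule ensemble: combine specialized classifiers' bot
    probabilities by taking the max, and flag the account as a bot
    if the most confident specialist exceeds the threshold."""
    score = max(specialist_probs)
    return score, score >= threshold

# Hypothetical specialists (e.g. spam bots, political bots, fake followers):
score, is_bot = max_rule_predict([0.12, 0.91, 0.30])
```

The design choice here is recall-oriented: a novel bot only needs to resemble one known class for the ensemble to flag it, at the cost of inheriting each specialist's false positives.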
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
- Automating Botnet Detection with Graph Neural Networks [106.24877728212546]
Botnets are now a major source for many network attacks, such as DDoS attacks and spam.
In this paper, we consider the neural network design challenges of using modern deep learning techniques to learn policies for botnet detection automatically.
arXiv Detail & Related papers (2020-03-13T15:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.