Optimal Transport-Guided Adversarial Attacks on Graph Neural Network-Based Bot Detection
- URL: http://arxiv.org/abs/2602.00318v1
- Date: Fri, 30 Jan 2026 21:13:40 GMT
- Title: Optimal Transport-Guided Adversarial Attacks on Graph Neural Network-Based Bot Detection
- Authors: Kunal Mukherjee, Zulfikar Alom, Tran Gia Bao Ngo, Cuneyt Gurcan Akcora, Murat Kantarcioglu
- Abstract summary: We introduce BOCLOAK to evaluate the robustness of GNN-based social bot detection via both edge editing and node injection adversarial attacks under realistic constraints. BOCLOAK achieves up to 80.13% higher attack success rates while using 99.80% less GPU memory under realistic real-world constraints. Most importantly, BOCLOAK shows that optimal transport provides a lightweight, principled framework for bridging the gap between adversarial attacks and real-world bot detection.
- Score: 15.837407035335653
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of bot accounts on social media poses significant risks to public discourse. To address this threat, modern bot detectors increasingly rely on Graph Neural Networks (GNNs). However, the effectiveness of these GNN-based detectors in real-world settings remains poorly understood. In practice, attackers continuously adapt their strategies and must operate under domain-specific and temporal constraints, which can fundamentally limit the applicability of existing attack methods. As a result, there is a critical need for robust GNN-based bot detection methods under realistic, constraint-aware attack scenarios. To address this gap, we introduce BOCLOAK to systematically evaluate the robustness of GNN-based social bot detection via both edge editing and node injection adversarial attacks under realistic constraints. BOCLOAK constructs a probability measure over spatio-temporal neighbor features and learns an optimal transport geometry that separates human and bot behaviors. It then decodes transport plans into sparse, plausible edge edits that evade detection while obeying real-world constraints. We evaluate BOCLOAK across three social bot datasets, five state-of-the-art bot detectors, and three adversarial defenses, and compare it against four leading graph adversarial attack baselines. BOCLOAK achieves up to 80.13% higher attack success rates while using 99.80% less GPU memory under realistic real-world constraints. Most importantly, BOCLOAK shows that optimal transport provides a lightweight, principled framework for bridging the gap between adversarial attacks and real-world bot detection.
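The pipeline the abstract describes (a probability measure over neighbor features, an optimal transport geometry, and transport plans decoded into sparse edge edits) can be illustrated with a small sketch. Everything below is a hypothetical stand-in built on entropic (Sinkhorn) optimal transport, not the authors' BOCLOAK implementation; the function names and the `budget` parameter are invented.

```python
# Sketch: move a bot's neighbor-feature distribution toward a human feature
# distribution via entropic OT, then read off the highest-mass couplings as
# candidate edge edits. Illustrative only -- not BOCLOAK itself.
import numpy as np

def sinkhorn_plan(a, b, cost, eps=0.1, n_iters=200):
    """Entropic-regularized OT plan between histograms a and b."""
    cost = cost / cost.max()                 # normalize for numerical stability
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):                 # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # transport plan P

def propose_edge_edits(bot_feats, human_feats, budget=5):
    """Decode the OT plan into sparse (bot-neighbor, human-node) edits."""
    a = np.full(len(bot_feats), 1.0 / len(bot_feats))     # uniform measures
    b = np.full(len(human_feats), 1.0 / len(human_feats))
    diff = bot_feats[:, None, :] - human_feats[None, :, :]
    cost = (diff ** 2).sum(-1)               # squared-Euclidean ground cost
    P = sinkhorn_plan(a, b, cost)
    # Keep only the `budget` highest-mass couplings, mimicking the
    # "sparse, plausible edge edits" decoding step.
    top = np.argsort(P, axis=None)[::-1][:budget]
    return [np.unravel_index(i, P.shape) for i in top]

rng = np.random.default_rng(0)
edits = propose_edge_edits(rng.normal(size=(8, 4)), rng.normal(size=(12, 4)))
print(edits)  # pairs (bot-neighbor index, human-node index) to rewire toward
```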
Related papers
- RABot: Reinforcement-Guided Graph Augmentation for Imbalanced and Noisy Social Bot Detection [16.050137938655364]
Social bot detection is pivotal for safeguarding the integrity of online information ecosystems. Recent graph neural network (GNN) solutions achieve strong results, but they remain hindered by two practical challenges. We propose the Reinforcement-guided graph Augmentation social Bot detector (RABot). RABot employs a neighborhood-aware oversampling strategy that linearly interpolates minority-class embeddings within local subgraphs.
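The interpolation step mentioned above resembles SMOTE restricted to local subgraphs. A minimal sketch under that assumption (all names invented; not RABot's code):

```python
# Synthesize minority (bot) embeddings by linearly interpolating a bot node
# with a same-class neighbor drawn from its local subgraph.
import numpy as np

def oversample_local(emb, labels, neighbors, minority=1, n_new=10, rng=None):
    rng = rng or np.random.default_rng(0)
    minority_nodes = np.flatnonzero(labels == minority)
    synthetic = []
    for _ in range(n_new):
        v = rng.choice(minority_nodes)
        # Candidates: same-class nodes inside v's local subgraph.
        cands = [u for u in neighbors[v] if labels[u] == minority]
        if not cands:
            continue
        u = rng.choice(cands)
        lam = rng.uniform()                        # interpolation coefficient
        synthetic.append(lam * emb[v] + (1 - lam) * emb[u])
    return np.array(synthetic)
```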
arXiv Detail & Related papers (2026-02-25T10:02:57Z)
- Non-Intrusive Graph-Based Bot Detection for E-Commerce Using Inductive Graph Neural Networks [4.230025065044209]
Malicious bots pose a growing threat to e-commerce platforms by scraping data, hoarding inventory, and perpetrating fraud. Traditional bot mitigation techniques, including IP blacklists and CAPTCHA-based challenges, are increasingly ineffective or intrusive. This work proposes a non-intrusive graph-based bot detection framework for e-commerce that models user session behavior through a graph representation.
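A minimal sketch of the session-graph idea, assuming sessions are clickstreams; the field names are invented, and an inductive GNN would then consume graphs like these:

```python
# Turn one user session into a small directed graph: nodes are pages,
# edges carry transition counts and dwell times.
from collections import defaultdict

def session_to_graph(events):
    """events: list of (page, timestamp) tuples, ordered by time."""
    edges = defaultdict(lambda: {"count": 0, "dwell": 0.0})
    for (p1, t1), (p2, t2) in zip(events, events[1:]):
        e = edges[(p1, p2)]
        e["count"] += 1            # how often this transition occurred
        e["dwell"] += t2 - t1      # time spent on p1 before moving on
    return dict(edges)

print(session_to_graph([("home", 0.0), ("item", 1.2), ("cart", 1.3)]))
```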
arXiv Detail & Related papers (2026-01-30T05:21:32Z)
- RoBCtrl: Attacking GNN-Based Social Bot Detectors via Reinforced Manipulation of Bots Control Interaction [51.46634975923564]
This paper proposes the first adversarial multi-agent reinforcement learning framework for social bot control attacks (RoBCtrl). Specifically, we use a diffusion model to generate high-fidelity bot accounts by reconstructing existing account data with minor modifications. We then employ a Multi-Agent Reinforcement Learning (MARL) method to simulate the bots' adversarial behavior.
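As a drastically simplified, single-agent stand-in for the reinforced control idea (RoBCtrl itself is multi-agent and adds a diffusion model), interaction choice can be framed as a bandit whose reward is the drop in the detector's bot score. The detector below is a toy placeholder:

```python
# Epsilon-greedy bandit: learn which accounts a bot should interact with
# to push its detector score down. Illustrative only -- not RoBCtrl.
import numpy as np

def control_attack(detector_score, n_accounts, bot, episodes=200, eps=0.2):
    rng = np.random.default_rng(0)
    q = np.zeros(n_accounts)       # value of interacting with each account
    edges = set()
    for _ in range(episodes):
        a = rng.integers(n_accounts) if rng.uniform() < eps else int(q.argmax())
        before = detector_score(bot, edges)
        edges.add(int(a))                    # bot follows / replies to account a
        reward = before - detector_score(bot, edges)   # score drop = reward
        q[a] += 0.1 * (reward - q[a])        # incremental value update
    return edges

toy_score = lambda bot, edges: 1.0 / (1 + len(edges))  # toy detector
print(sorted(control_attack(toy_score, n_accounts=5, bot=0)))
```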
arXiv Detail & Related papers (2025-10-16T02:41:49Z)
- Boosting Bot Detection via Heterophily-Aware Representation Learning and Prototype-Guided Cluster Discovery [16.548403922027248]
BotHP is a generative Graph Self-Supervised Learning framework tailored to boost graph-based bot detectors. It uses a dual-encoder architecture, consisting of a graph-aware encoder to capture node commonality and a graph-agnostic encoder to preserve node uniqueness. It consistently boosts graph-based bot detectors, improving detection performance, alleviating label reliance, and enhancing generalization capability.
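One hypothetical reading of the dual-encoder design, with assumed dimensions and fusion by concatenation (not necessarily BotHP's exact architecture):

```python
# Graph-aware encoder mixes neighbor information (commonality); the
# graph-agnostic encoder is a per-node MLP (uniqueness); concatenate both.
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.graph_aware = nn.Linear(in_dim, hid_dim)   # GCN-style layer
        self.graph_agnostic = nn.Sequential(            # per-node MLP
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))

    def forward(self, x, adj_norm):
        h_common = torch.relu(self.graph_aware(adj_norm @ x))  # aggregate, then transform
        h_unique = self.graph_agnostic(x)                      # ignores the graph
        return torch.cat([h_common, h_unique], dim=-1)

x, adj = torch.randn(5, 8), torch.eye(5)   # 5 nodes; identity as stand-in adjacency
print(DualEncoder(8, 16)(x, adj).shape)    # torch.Size([5, 32])
```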
arXiv Detail & Related papers (2025-06-01T12:44:53Z)
- Verifying message-passing neural networks via topology-based bounds tightening [3.3267518043390205]
We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs).
Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems.
We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
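The paper encodes certification as mixed-integer optimization; as a naive, hypothetical stand-in, brute-force enumeration over an edge-flip budget conveys what such a certificate asserts (the toy model and budget are invented):

```python
# Certify a node's prediction against all topological attacks that add or
# remove up to `budget` edges -- by exhaustive enumeration, not MIO.
import itertools
import numpy as np

def certify_by_enumeration(predict, adj, x, target, budget=1):
    base = predict(adj, x)[target]
    n = adj.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for flips in itertools.combinations(pairs, budget):
        pert = adj.copy()
        for i, j in flips:                    # flip = add or remove the edge
            pert[i, j] = pert[j, i] = 1 - pert[i, j]
        if predict(pert, x)[target] != base:
            return False                      # found a successful attack
    return True                               # robust to every budgeted flip

# Toy one-layer "GNN": mean-field aggregation, classify by sign.
predict = lambda a, x: ((a + np.eye(len(a))) @ x).ravel() > 0
adj = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
print(certify_by_enumeration(predict, adj, np.array([[1.], [2.], [-3.]]), 0))
```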
arXiv Detail & Related papers (2024-02-21T17:05:27Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
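A bare-bones illustration of node injection, with an invented wiring heuristic and mean-feature template (not the paper's method):

```python
# Enlarge adjacency and features with fake accounts, wire them to target
# bots so message passing mixes benign-looking features into the targets.
import numpy as np

def inject_nodes(adj, feats, targets, n_inject=2, feat_template=None):
    n = adj.shape[0]
    big = n + n_inject
    new_adj = np.zeros((big, big))
    new_adj[:n, :n] = adj
    template = feat_template if feat_template is not None else feats.mean(0)
    new_feats = np.vstack([feats, np.tile(template, (n_inject, 1))])
    for k in range(n_inject):                 # connect each injected node
        for t in targets:                     # symmetrically to every target
            new_adj[n + k, t] = new_adj[t, n + k] = 1.0
    return new_adj, new_feats
```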
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
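A compact, hypothetical sketch of the "everything at once" idea: relax all edge flips into one differentiable perturbation matrix, ascend the victim's loss by gradient, then discretize the strongest entries. The victim `model` and budget below are placeholders, not DGA itself:

```python
import torch

def differentiable_edge_attack(model, adj, x, y, budget=10, steps=50, lr=0.1):
    # One relaxed flip variable per potential edge; sigmoid(-4) starts near 0.
    pert = torch.full_like(adj, -4.0, requires_grad=True)
    opt = torch.optim.Adam([pert], lr=lr)
    for _ in range(steps):
        p = torch.sigmoid(pert)
        adj_atk = adj + (1 - 2 * adj) * p     # flip direction depends on edge
        loss = -torch.nn.functional.cross_entropy(model(x, adj_atk), y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                     # discretize the top-`budget` flips
        top = torch.topk(torch.sigmoid(pert).flatten(), budget).indices
        adj_atk = adj.clone()
        adj_atk.view(-1)[top] = 1.0 - adj_atk.view(-1)[top]
    return adj_atk
```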
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
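A hypothetical illustration of a subgraph trigger in this spirit, with an invented clique topology and feature values (not GTA's trigger generator):

```python
# Stamp a fixed trigger -- clique topology plus overwritten feature
# dimensions -- onto chosen anchor nodes at poisoning time.
import numpy as np

def stamp_trigger(adj, feats, anchor_nodes, trigger_value=1.0):
    adj, feats = adj.copy(), feats.copy()
    for i in anchor_nodes:                    # trigger topology: a clique
        for j in anchor_nodes:
            if i != j:
                adj[i, j] = 1.0
    feats[list(anchor_nodes), :2] = trigger_value   # trigger features
    return adj, feats
```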
arXiv Detail & Related papers (2020-06-21T19:45:30Z)