Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical
Reinforcement Learning
- URL: http://arxiv.org/abs/2110.10655v1
- Date: Wed, 20 Oct 2021 16:49:26 GMT
- Title: Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical
Reinforcement Learning
- Authors: Thai Le, Long Tran-Thanh, Dongwon Lee
- Abstract summary: We show that it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
Our proposed policy networks are trained on a large number of synthetic graphs and generalize better than baselines on unseen real-life graphs.
This makes our approach a practical adversarial attack when deployed in a real-life setting.
- Score: 31.33996447671789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Socialbots are software-driven user accounts on social platforms, acting
autonomously (mimicking human behavior), with the aims to influence the
opinions of other users or spread targeted misinformation for particular goals.
As socialbots undermine the ecosystem of social platforms, they are often
considered harmful. As such, there have been several computational efforts to
auto-detect socialbots. However, to the best of our knowledge, the adversarial
nature of these socialbots has not yet been studied. This raises the question:
can adversaries, controlling socialbots, exploit AI techniques to their
advantage? We demonstrate that it is indeed possible for adversaries to
exploit computational learning mechanisms such as reinforcement learning (RL)
to maximize the influence of socialbots while avoiding detection. We first
formulate adversarial socialbot learning as a cooperative game between two
functional hierarchical RL agents. While one agent curates a sequence of
activities that avoids detection, the other agent aims to maximize network
influence by selectively connecting with the right users.
Our proposed policy networks are trained on a large number of synthetic graphs
and generalize better than baselines on unseen real-life graphs, both in terms
of
maximizing network influence (up to +18%) and sustainable stealthiness (up to
+40% undetectability) under a strong bot detector (with 90% detection
accuracy). During inference, the complexity of our approach scales linearly,
independent of a network's structure and the virality of news. This makes our
approach a practical adversarial attack when deployed in a real-life setting.
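The cooperative two-agent structure described above can be illustrated with a minimal, purely hypothetical sketch. The toy detector, the one-hop influence measure, and the greedy decision rules below are all assumptions standing in for the paper's learned hierarchical policy networks and its actual bot detector; the sketch only shows how an "activity" agent (keeping detection risk low) and an "influence" agent (choosing which users to connect with) interact in one episode.

```python
ACTIVITIES = ["post", "retweet", "reply", "follow", "idle"]

def detection_risk(activity_seq):
    """Toy bot detector: risk grows with consecutive repeats of one action."""
    risk = sum(0.2 for a, b in zip(activity_seq, activity_seq[1:]) if a == b)
    return min(risk, 1.0)

def network_influence(graph, connected):
    """Toy influence: distinct users within one hop of the bot's connections."""
    reached = set(connected)
    for user in connected:
        reached.update(graph.get(user, []))
    return len(reached)

def run_episode(graph, horizon=10):
    """Cooperative two-agent loop (greedy stand-ins for learned policies).

    The activity agent picks, at each step, among the actions that keep
    detection risk minimal (rotating deterministically through ties).
    The influence agent acts only on 'follow' steps, connecting to the
    user whose one-hop neighborhood adds the most reach.
    """
    activities, connected = [], []
    for t in range(horizon):
        risks = {a: detection_risk(activities + [a]) for a in ACTIVITIES}
        lowest = min(risks.values())
        candidates = [a for a in ACTIVITIES if risks[a] == lowest]
        action = candidates[t % len(candidates)]
        activities.append(action)
        if action == "follow":
            targets = [u for u in graph if u not in connected]
            if targets:
                connected.append(max(
                    targets,
                    key=lambda u: network_influence(graph, connected + [u])))
    return activities, connected

# Tiny synthetic follower graph: user -> that user's followers
toy_graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
acts, conns = run_episode(toy_graph)
print("risk:", detection_risk(acts),
      "influence:", network_influence(toy_graph, conns))
```

Each step costs only a scan over a fixed action set plus a greedy pass over candidate users, which mirrors (in spirit, not in mechanism) the paper's claim that per-step inference cost stays linear and independent of the network's structure.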
Related papers
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - SeBot: Structural Entropy Guided Multi-View Contrastive Learning for Social Bot Detection [34.68635583099056]
We propose SEBot, a novel multi-view graph-based contrastive learning-enabled social bot detector.
In particular, we use structural entropy as an uncertainty metric to optimize the entire graph's structure.
We also design an encoder to enable message passing beyond the homophily assumption.
arXiv Detail & Related papers (2024-05-18T08:16:11Z) - Adversarial Botometer: Adversarial Analysis for Social Bot Detection [1.9280536006736573]
Social bots produce content that mimics human creativity.
Malicious social bots emerge to deceive people with their unrealistic content.
We evaluate the behavior of a text-based bot detector in a competitive environment.
arXiv Detail & Related papers (2024-05-03T11:28:21Z) - User-Like Bots for Cognitive Automation: A Survey [4.075971633195745]
Despite the hype, bots with human user-like cognition do not currently exist.
They lack situational awareness on the digital platforms where they operate.
We discuss how cognitive architectures can contribute to creating intelligent software bots.
arXiv Detail & Related papers (2023-11-20T20:16:24Z) - Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera on pursuit-evasion interactions in the wild.
arXiv Detail & Related papers (2023-08-30T17:59:05Z) - SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Intention Aware Robot Crowd Navigation with Attention-Based Interaction
Graph [3.8461692052415137]
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents.
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
arXiv Detail & Related papers (2022-03-03T16:26:36Z) - Identification of Twitter Bots based on an Explainable ML Framework: the
US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.