Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial
Minority Influence
- URL: http://arxiv.org/abs/2302.03322v2
- Date: Sun, 11 Jun 2023 06:16:30 GMT
- Title: Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial
Minority Influence
- Authors: Simin Li, Jun Guo, Jingqiao Xiu, Pu Feng, Xin Yu, Aishan Liu, Wenjun
Wu, Xianglong Liu
- Abstract summary: Adversarial Minority Influence (AMI) is a practical black-box attack that can be launched without knowing victim parameters.
AMI is also strong by considering the complex multi-agent interaction and the cooperative goal of agents.
We achieve the first successful attack against real-world robot swarms and effectively fool agents in simulated environments into collectively worst-case scenarios.
- Score: 57.154716042854034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study probes the vulnerabilities of cooperative multi-agent
reinforcement learning (c-MARL) under adversarial attacks, a critical
determinant of c-MARL's worst-case performance prior to real-world
implementation. Current observation-based attacks, constrained by white-box
assumptions, overlook c-MARL's complex multi-agent interactions and cooperative
objectives, resulting in impractical and limited attack capabilities. To
address these shortcomings, we propose Adversarial Minority Influence (AMI), a
practical and strong attack for c-MARL. AMI is practical as a black-box attack: it
can be launched without knowing victim parameters. AMI is also strong in that it
considers the complex multi-agent interactions and the cooperative goal of the agents,
enabling a single adversarial agent to unilaterally mislead the majority of victims
into forming targeted worst-case cooperation. This mirrors minority influence
phenomena in social psychology. To achieve maximum deviation in victim policies
under complex agent-wise interactions, our unilateral attack aims to
characterize and maximize the impact of the adversary on the victims. This is
achieved by adapting a unilateral agent-wise relation metric derived from
mutual information, thereby mitigating the adverse effects of victim influence
on the adversary. To lead the victims into a jointly detrimental scenario, our
targeted attack deceives victims into a long-term, cooperatively harmful
situation by guiding each victim towards a specific target, determined through
a trial-and-error process executed by a reinforcement learning agent. Through
AMI, we achieve the first successful attack against real-world robot swarms and
effectively fool agents in simulated environments into collectively worst-case
scenarios, including StarCraft II and Multi-Agent MuJoCo. The source code and
demonstrations can be found at: https://github.com/DIG-Beihang/AMI.
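As a rough illustration of the quantity underlying the unilateral agent-wise relation metric (AMI's actual objective adapts this idea to mitigate victim-to-adversary influence and is defined in the paper, not here), the following sketch estimates the empirical mutual information between an adversary's discrete actions and a victim's actions: a victim that is strongly steered by the adversary yields a higher score. All names are illustrative and not taken from the AMI codebase.

```python
import numpy as np

def empirical_mutual_information(adv_actions, victim_actions, n_adv, n_vic):
    """Estimate I(a_adv; a_victim) from paired samples of discrete actions."""
    joint = np.zeros((n_adv, n_vic))
    for a, v in zip(adv_actions, victim_actions):
        joint[a, v] += 1
    joint /= joint.sum()                              # empirical joint distribution
    p_adv = joint.sum(axis=1, keepdims=True)          # marginal over adversary actions
    p_vic = joint.sum(axis=0, keepdims=True)          # marginal over victim actions
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (p_adv @ p_vic)[mask])))

# Toy check: a victim that roughly copies the adversary shows high influence,
# while an independent victim shows influence close to zero.
rng = np.random.default_rng(0)
adv = rng.integers(0, 4, size=5000)
victim_coupled = (adv + rng.integers(0, 2, size=5000)) % 4
victim_free = rng.integers(0, 4, size=5000)
print(empirical_mutual_information(adv, victim_coupled, 4, 4))  # roughly log(2) nats
print(empirical_mutual_information(adv, victim_free, 4, 4))     # near 0
```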
Related papers
- CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems [13.776447110639193]
We introduce a novel method that involves injecting traitor agents into the CMARL system.
In the Traitor Markov Decision Process (TMDP), traitors are trained using the same MARL algorithm as the victim agents, with their reward function set as the negative of the victim agents' reward.
CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies.
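A minimal sketch of the reward-negation idea described above, assuming a generic multi-agent environment API that returns per-agent reward dictionaries (this is not the CuDA2 implementation; the wrapper and its interface are illustrative):

```python
class TraitorRewardWrapper:
    """Expose the negated team reward to a designated traitor agent so it can be
    trained with the same MARL algorithm as the victims (illustrative only)."""

    def __init__(self, env, traitor_id):
        self.env = env
        self.traitor_id = traitor_id

    def reset(self):
        return self.env.reset()

    def step(self, actions):
        obs, rewards, dones, infos = self.env.step(actions)
        rewards = dict(rewards)
        # The traitor is rewarded for whatever hurts the cooperative objective.
        rewards[self.traitor_id] = -rewards[self.traitor_id]
        return obs, rewards, dones, infos
```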
arXiv Detail & Related papers (2024-06-25T09:59:31Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems [40.91476827978885]
Attackers can rapidly exploit the victim's vulnerabilities, generating adversarial policies that result in the failure of specific tasks.
We propose a novel black-box attack (SUB-PLAY) that incorporates the concept of constructing multiple subgames to mitigate the impact of partial observability.
We evaluate three potential defenses aimed at exploring ways to mitigate security threats posed by adversarial policies.
arXiv Detail & Related papers (2024-02-06T06:18:16Z)
- Adversarial Attacks on Cooperative Multi-agent Bandits [41.79235070291252]
We study adversarial attacks on CMA2B in both homogeneous and heterogeneous settings.
In the homogeneous setting, we propose attack strategies that convince all agents to select a particular target arm $T-o(T)$ times while incurring $o(T)$ attack costs in $T$ rounds.
In the heterogeneous setting, we prove that a target arm attack requires linear attack costs and propose attack strategies that can force a maximum number of agents to suffer linear regrets while incurring sublinear costs and manipulating only the observations of a few target agents.
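To make the flavor of such target-arm attacks concrete, here is a hedged sketch of a generic reward-poisoning rule against a single empirical-mean-based learner (e.g., UCB): whenever a non-target arm is pulled, the attacker lowers the observed reward just enough that the arm's empirical mean drops below the target's by a margin. This is a standard illustration of bandit reward poisoning, not the cited paper's multi-agent strategies or cost bounds.

```python
def attack_reward(arm, reward, target, emp_mean, counts, margin=0.1):
    """Corrupt the observed reward of a non-target arm so that, after this pull,
    its empirical mean sits below the target arm's mean minus `margin`.
    Returns the (possibly corrupted) reward and the attack cost paid this round."""
    if arm == target:
        return reward, 0.0
    desired = emp_mean[target] - margin          # mean we want this arm to display
    n = counts[arm] + 1                          # pulls of this arm after the update
    corrupted = min(reward, n * desired - counts[arm] * emp_mean[arm])
    return corrupted, reward - corrupted

# Example: arm 0 is the target (mean 0.5); arm 1 currently looks better (mean 0.7).
# In practice corrupted rewards may additionally be clipped to the valid range.
emp_mean, counts = [0.5, 0.7], [10, 10]
print(attack_reward(1, 0.8, 0, emp_mean, counts))  # strongly reduced reward, positive cost
```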
arXiv Detail & Related papers (2023-11-03T04:03:19Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning [45.408568528354216]
We investigate the impact of adversarial attacks on multi-agent reinforcement learning (MARL).
In the considered setup, there is an attacker who is able to modify the rewards before the agents receive them or manipulate the actions before the environment receives them.
We show that the mixed attack strategy can efficiently attack MARL agents even if the attacker has no prior information about the underlying environment and the agents' algorithms.
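The threat model above can be pictured as a man-in-the-middle between agents and environment. The sketch below shows that structure only, under an assumed dict-based multi-agent environment API; the policies that choose the perturbations are the paper's contribution and are not reproduced here.

```python
class ManInTheMiddleEnv:
    """Illustrative attacker wrapper: optionally perturb actions before the
    environment executes them and/or rewards before the agents observe them."""

    def __init__(self, env, action_attack=None, reward_attack=None):
        self.env = env
        self.action_attack = action_attack  # callable: actions -> poisoned actions
        self.reward_attack = reward_attack  # callable: rewards -> poisoned rewards

    def reset(self):
        return self.env.reset()

    def step(self, actions):
        if self.action_attack is not None:
            actions = self.action_attack(actions)   # manipulate actions
        obs, rewards, dones, infos = self.env.step(actions)
        if self.reward_attack is not None:
            rewards = self.reward_attack(rewards)   # manipulate rewards
        return obs, rewards, dones, infos
```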
arXiv Detail & Related papers (2023-07-15T00:38:55Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
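A hedged sketch of the label-only, repeated-query idea: perturb a candidate input many times, count how often the victim's predicted label stays the same, and use that stability as a membership score (training points tend to sit further from the decision boundary). The predictor, noise scale, and thresholding here are placeholders, not the paper's exact procedure.

```python
import numpy as np

def sampling_attack_score(victim_predict, x, n_queries=100, sigma=0.05, rng=None):
    """Fraction of noisy queries whose predicted label matches the clean label."""
    rng = rng or np.random.default_rng()
    base_label = victim_predict(x)
    agreements = 0
    for _ in range(n_queries):
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
        agreements += int(victim_predict(x_noisy) == base_label)
    return agreements / n_queries  # higher -> more likely a training member

# Hypothetical usage with any label-only classifier `victim_predict`:
#   score = sampling_attack_score(victim_predict, x_candidate, n_queries=200)
#   declare "member" if score >= threshold (threshold calibrated on shadow data).
```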
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- On the Robustness of Cooperative Multi-Agent Reinforcement Learning [32.92198917228515]
In cooperative multi-agent reinforcement learning (c-MARL), agents learn to cooperatively take actions as a team to maximize a total team reward.
We analyze the robustness of c-MARL to adversaries capable of attacking one of the agents on a team.
By attacking a single agent, our attack method has a highly negative impact on the overall team reward, reducing it from 20 to 9.4.
arXiv Detail & Related papers (2020-03-08T05:12:13Z)