Adversarial Attacks on Cooperative Multi-agent Bandits
- URL: http://arxiv.org/abs/2311.01698v1
- Date: Fri, 3 Nov 2023 04:03:19 GMT
- Title: Adversarial Attacks on Cooperative Multi-agent Bandits
- Authors: Jinhang Zuo, Zhiyao Zhang, Xuchuang Wang, Cheng Chen, Shuai Li, John
C.S. Lui, Mohammad Hajiesmaili, Adam Wierman
- Abstract summary: We study adversarial attacks on CMA2B in both homogeneous and heterogeneous settings.
In the homogeneous setting, we propose attack strategies that convince all agents to select a particular target arm $T-o(T)$ times while incurring $o(T)$ attack costs in $T$ rounds.
In the heterogeneous setting, we prove that a target arm attack requires linear attack costs and propose attack strategies that can force a maximum number of agents to suffer linear regrets while incurring sublinear costs and manipulating only the observations of a few target agents.
- Score: 41.79235070291252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative multi-agent multi-armed bandits (CMA2B) consider the
collaborative efforts of multiple agents in a shared multi-armed bandit game.
We study latent vulnerabilities exposed by this collaboration and consider
adversarial attacks on a few agents with the goal of influencing the decisions
of the rest. More specifically, we study adversarial attacks on CMA2B in both
homogeneous settings, where agents operate with the same arm set, and
heterogeneous settings, where agents have distinct arm sets. In the homogeneous
setting, we propose attack strategies that, by targeting just one agent,
convince all agents to select a particular target arm $T-o(T)$ times while
incurring $o(T)$ attack costs in $T$ rounds. In the heterogeneous setting, we
prove that a target arm attack requires linear attack costs and propose attack
strategies that can force a maximum number of agents to suffer linear regrets
while incurring sublinear costs and only manipulating the observations of a few
target agents. Numerical experiments validate the effectiveness of our proposed
attack strategies.
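To make the flavor of these observation-manipulation attacks concrete, below is a minimal sketch of the single-learner building block (reward poisoning against a UCB agent): whenever a non-target arm is pulled, its observed reward is pushed just below the current estimate of the target arm, so the target arm ends up chosen in all but a vanishing fraction of rounds at sublinear cost. The cooperative, multi-agent propagation analyzed in the paper is not modeled here, and the environment, constants, and attack rule are illustrative assumptions rather than the paper's algorithm.

```python
# Sketch: reward-poisoning attack on a single UCB1 learner (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 50_000
true_means = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
target = 4                                   # suboptimal arm the attacker promotes

counts, sums = np.zeros(K), np.zeros(K)
attack_cost, target_pulls = 0.0, 0

def radius(n, t):
    # UCB1-style confidence radius
    return np.sqrt(2 * np.log(t) / n)

for t in range(1, T + 1):
    if np.any(counts == 0):                  # pull each arm once first
        arm = int(np.argmin(counts))
    else:
        arm = int(np.argmax(sums / counts + radius(counts, t)))
    reward = rng.normal(true_means[arm], 0.1)

    if arm != target and counts[target] > 0:
        # corrupt the observation: make the pulled arm look worse than the
        # target arm's current empirical mean by a confidence margin
        fake = min(reward, sums[target] / counts[target]
                   - 2 * radius(counts[target], t))
        attack_cost += reward - fake
        reward = fake

    counts[arm] += 1
    sums[arm] += reward
    target_pulls += int(arm == target)

print(f"target pulls: {target_pulls}/{T}, attack cost: {attack_cost:.1f}")
```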
Related papers
- CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems [13.776447110639193]
We introduce a novel method that involves injecting traitor agents into the CMARL system.
In the resulting Traitor Markov Decision Process (TMDP), traitors are trained with the same MARL algorithm as the victim agents, with their reward function set to the negative of the victim agents' reward.
CuDA2 enhances the efficiency and aggressiveness of attacks on the specified victim agents' policies.
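The reward negation at the core of this setup can be sketched in a few lines; the environment interface and names below are assumptions for illustration, not CuDA2's actual code.

```python
# Sketch: wrap a multi-agent environment so traitor agents receive the negated
# victim reward while victims keep theirs (interface is assumed, not CuDA2's API).
class TraitorRewardWrapper:
    def __init__(self, env, traitor_ids):
        self.env = env
        self.traitor_ids = set(traitor_ids)

    def step(self, joint_action):
        obs, rewards, done, info = self.env.step(joint_action)
        # victims keep their reward; traitors are rewarded for hurting them
        rewards = {
            agent: (-r if agent in self.traitor_ids else r)
            for agent, r in rewards.items()
        }
        return obs, rewards, done, info
```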
arXiv Detail & Related papers (2024-06-25T09:59:31Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems [6.69087470775851]
We formulate the problem of performing optimal adversarial agent-to-agent attacks using distributed attack agents.
We propose an optimal method that integrates within-step static constrained attack-resource allocation with between-step dynamic programming.
Our numerical results show that the proposed attacks can significantly reduce the rewards received by the attacked agents.
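A generic sketch of that two-level structure is shown below: an inner routine allocates a per-step budget across agents, and an outer dynamic program decides how much of the total budget to spend at each step. The damage model and all numbers are illustrative assumptions, not the paper's formulation.

```python
# Sketch: within-step budget allocation + between-step dynamic programming.
import numpy as np

T, B = 5, 6                       # steps and total (integer) attack budget
vuln = np.array([0.5, 1.0, 2.0])  # assumed per-agent "damage per unit budget"

def within_step_damage(b):
    # static allocation: spend the per-step budget greedily on the most
    # vulnerable agents (each agent absorbs at most one unit here)
    return float(np.sort(vuln)[::-1][:b].sum())

# between-step DP: best[t][b] = max damage achievable from step t with budget b
best = np.zeros((T + 1, B + 1))
for t in range(T - 1, -1, -1):
    for b in range(B + 1):
        best[t][b] = max(within_step_damage(s) + best[t + 1][b - s]
                         for s in range(b + 1))

print("max total damage with budget", B, "=", best[0][B])
```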
arXiv Detail & Related papers (2023-11-01T21:28:02Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that employing the proposed framework to the existing approaches significantly advances multi-target robustness.
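One way such budget adjustment could look is sketched below: each adversary (for instance an l_inf, l_2, or l_1 attack) gets a perturbation budget that grows when the model is comparatively robust to it and shrinks when that adversary already dominates training. The update rule and constants are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch: nudge per-adversary budgets toward balance to avoid player dominance.
def update_budgets(budgets, robust_acc, lr=0.1):
    """budgets, robust_acc: dicts keyed by adversary name."""
    mean_acc = sum(robust_acc.values()) / len(robust_acc)
    new_budgets = {}
    for adv, eps in budgets.items():
        # robust accuracy above average -> this adversary is "losing", grow its
        # budget; below average -> it dominates, shrink its budget
        new_budgets[adv] = eps * (1.0 + lr * (robust_acc[adv] - mean_acc))
    return new_budgets

budgets = {"linf": 8 / 255, "l2": 0.5, "l1": 12.0}
robust_acc = {"linf": 0.52, "l2": 0.41, "l1": 0.47}
print(update_budgets(budgets, robust_acc))
```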
arXiv Detail & Related papers (2023-06-27T14:02:10Z)
- Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers [23.15190337027283]
We propose Robust Multi-Agent Coordination via Generation of Auxiliary Adversarial Attackers (ROMANCE)
ROMANCE enables the trained policy to encounter diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations.
The quality objective minimizes the ego system's coordination performance, while a novel diversity regularizer diversifies the behaviors among attackers.
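A rough sketch of a quality-plus-diversity score of that flavor is given below; the behavior descriptors and weights are illustrative assumptions, not ROMANCE's actual objective.

```python
# Sketch: score each auxiliary attacker by how much it hurts the ego system
# (quality) plus how different its behavior is from the other attackers (diversity).
import numpy as np

def attacker_scores(ego_returns, behavior_embs, div_weight=0.5):
    """ego_returns[i]: ego-system return under attacker i (lower is better for the
    attacker); behavior_embs[i]: a behavior descriptor for attacker i."""
    behavior_embs = np.asarray(behavior_embs, dtype=float)
    quality = -np.asarray(ego_returns, dtype=float)           # minimize ego return
    dists = np.linalg.norm(behavior_embs[:, None] - behavior_embs[None, :], axis=-1)
    diversity = dists.sum(axis=1) / max(len(ego_returns) - 1, 1)  # mean distance to others
    return quality + div_weight * diversity

print(attacker_scores([3.0, 2.0, 2.5], [[0.0, 1.0], [1.0, 0.0], [0.9, 0.1]]))
```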
arXiv Detail & Related papers (2023-05-10T05:29:47Z)
- Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence [41.14664289570607]
Adversarial Minority Influence (AMI) is a practical black-box attack and can be launched without knowing victim parameters.
AMI is also strong by considering the complex multi-agent interaction and the cooperative goal of agents.
We achieve the first successful attack against real-world robot swarms and effectively fool agents in simulated environments into collectively worst-case scenarios.
arXiv Detail & Related papers (2023-02-07T08:54:37Z)
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
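A minimal sketch of such a combination search is shown below; the operation set and evaluation interface are assumptions for illustration, not CAA's actual search space or policy.

```python
# Sketch: enumerate short compositions of attack operations and keep the one that
# most degrades a user-supplied robustness score (lower accuracy = stronger attack).
from itertools import permutations

def search_composite_attack(attack_ops, evaluate, max_len=2):
    """attack_ops: dict name -> attack fn; evaluate: fn(list of attack fns) ->
    robust accuracy of the defended model under that composition."""
    best_seq, best_acc = (), float("inf")
    for length in range(1, max_len + 1):
        for seq in permutations(attack_ops, length):
            acc = evaluate([attack_ops[name] for name in seq])
            if acc < best_acc:
                best_seq, best_acc = seq, acc
    return best_seq, best_acc

# toy usage: dummy attack operations and a placeholder evaluation function
ops = {"fgsm": lambda x: x, "pgd": lambda x: x, "spatial": lambda x: x}
print(search_composite_attack(ops, evaluate=lambda fns: 1.0 / (1 + len(fns))))
```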
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.