Adversarial attacks in consensus-based multi-agent reinforcement
learning
- URL: http://arxiv.org/abs/2103.06967v1
- Date: Thu, 11 Mar 2021 21:44:18 GMT
- Title: Adversarial attacks in consensus-based multi-agent reinforcement
learning
- Authors: Martin Figura, Krishna Chaitanya Kosaraju, and Vijay Gupta
- Abstract summary: We show that an adversarial agent can persuade all the other agents in the network to implement policies that optimize an objective that it desires.
In this sense, the standard consensus-based MARL algorithms are fragile to attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, many cooperative distributed multi-agent reinforcement learning
(MARL) algorithms have been proposed in the literature. In this work, we study
the effect of adversarial attacks on a network that employs a consensus-based
MARL algorithm. We show that an adversarial agent can persuade all the other
agents in the network to implement policies that optimize an objective that it
desires. In this sense, the standard consensus-based MARL algorithms are
fragile to attacks.
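To make this fragility concrete, the toy simulation below is a minimal sketch, not the paper's algorithm: it omits the local actor-critic updates and uses a scalar parameter, a complete communication graph, and uniform mixing weights as illustrative assumptions. It shows how a single "stubborn" adversarial agent that never averages in its neighbors' values pulls every cooperative agent's consensus parameter to the adversary's own target, because plain averaging has no mechanism to discount such a participant.

```python
import numpy as np

np.random.seed(0)

n_agents = 5        # agents 0..3 are cooperative, agent 4 is adversarial
adversary = 4
adv_target = 10.0   # value the adversary wants the whole network to adopt

# Cooperative agents start near 0 (their "honest" estimates); the adversary
# starts, and stays, at its target value.
theta = np.random.randn(n_agents) * 0.1
theta[adversary] = adv_target

# Doubly stochastic mixing matrix over a complete graph (uniform weights).
W = np.full((n_agents, n_agents), 1.0 / n_agents)

for _ in range(200):
    theta = W @ theta               # standard consensus (averaging) step
    theta[adversary] = adv_target   # the adversary ignores its neighbors

print(theta)  # all entries converge to 10.0: the adversary's objective wins
```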
Related papers
- Enhancing the Robustness of QMIX against State-adversarial Attacks [6.627954554805906]
We discuss four techniques to improve the robustness of SARL algorithms and extend them to multi-agent scenarios.
We train models using a variety of attacks in this research.
We then evaluate models trained against one attack by subjecting them to the other attacks.
arXiv Detail & Related papers (2023-07-03T10:10:34Z)
- Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that applying the proposed framework to existing approaches significantly advances multi-target robustness.
arXiv Detail & Related papers (2023-06-27T14:02:10Z)
- Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning [7.784991832712813]
We introduce a Bayesian network to capture correlations between agents' action selections in their joint policy.
We develop practical algorithms to learn the context-aware Bayesian network policies.
Empirical results on a range of MARL benchmarks show the benefits of our approach.
arXiv Detail & Related papers (2023-06-02T21:22:27Z)
- An Algorithm For Adversary Aware Decentralized Networked MARL [0.0]
We identify vulnerabilities in the consensus updates of existing MARL algorithms.
We provide an algorithm that allows non-adversarial agents to reach consensus in the presence of adversaries (a generic robust-consensus filter in this spirit is sketched after this list).
arXiv Detail & Related papers (2023-05-09T16:02:31Z)
- MAVIPER: Learning Decision Tree Policies for Interpretable Multi-Agent Reinforcement Learning [38.77840067555711]
We propose the first set of interpretable MARL algorithms that extract decision-tree policies from neural networks trained with MARL.
The first algorithm, IVIPER, extends VIPER, a recent method for single-agent interpretable RL, to the multi-agent setting.
To better capture coordination between agents, we propose a novel centralized decision-tree training algorithm, MAVIPER.
arXiv Detail & Related papers (2022-05-25T02:38:10Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns an optimizer for adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Emergence of Theory of Mind Collaboration in Multiagent Systems [65.97255691640561]
We propose an adaptive training algorithm to develop effective collaboration between agents with ToM.
We evaluate our algorithm on two games, where it surpasses all previous decentralized execution algorithms without ToM modeling.
arXiv Detail & Related papers (2021-09-30T23:28:00Z)
- Multi-Task Federated Reinforcement Learning with Adversaries [2.6080102941802106]
Adversaries pose a serious threat to reinforcement learning algorithms.
In this paper, we analyze Multi-task Federated Reinforcement Learning algorithms.
We propose an adaptive attack method with better attack performance.
arXiv Detail & Related papers (2021-03-11T05:39:52Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale, general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension [96.62963688510035]
Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation.
We present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation.
arXiv Detail & Related papers (2020-04-13T17:20:08Z)
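For contrast with the attack sketch above, the following is a minimal sketch of a generic robust-consensus filter (a coordinate-wise trimmed mean in the spirit of W-MSR-type rules). It is not the algorithm from the adversary-aware decentralized MARL paper listed above; the complete graph, the bound F on adversarial neighbors, and all values are illustrative assumptions. By discarding the F smallest and F largest neighbor values before averaging, each cooperative agent prevents a single stubborn adversary from dragging the consensus to its target.

```python
import numpy as np

np.random.seed(0)

n_agents, adversary, adv_target = 5, 4, 10.0
F = 1  # assumed upper bound on the number of adversarial neighbors

theta = np.random.randn(n_agents) * 0.1  # honest initial estimates near 0
theta[adversary] = adv_target

def trimmed_consensus_step(theta):
    new = theta.copy()
    for i in range(n_agents):
        if i == adversary:
            continue                              # adversary keeps broadcasting its target
        neighbors = np.sort(np.delete(theta, i))  # complete graph: everyone else
        kept = neighbors[F:len(neighbors) - F]    # drop the F smallest and F largest values
        new[i] = np.mean(np.append(kept, theta[i]))
    new[adversary] = adv_target
    return new

for _ in range(200):
    theta = trimmed_consensus_step(theta)

print(theta[:adversary])  # cooperative agents agree near their honest values, not 10.0
```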