Robust Communicative Multi-Agent Reinforcement Learning with Active
Defense
- URL: http://arxiv.org/abs/2312.11545v1
- Date: Sat, 16 Dec 2023 09:02:56 GMT
- Title: Robust Communicative Multi-Agent Reinforcement Learning with Active
Defense
- Authors: Lebin Yu, Yunbo Qiu, Quanming Yao, Yuan Shen, Xudong Zhang and Jian
Wang
- Abstract summary: We propose an active defense strategy, where agents automatically reduce the impact of potentially harmful messages on the final decision.
We design an Active Defense Multi-Agent Communication framework (ADMAC), which estimates the reliability of received messages and adjusts their impact on the final decision.
The superiority of ADMAC over existing methods is validated by experiments in three communication-critical tasks under four types of attacks.
- Score: 38.6815513394882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication in multi-agent reinforcement learning (MARL) has recently
been shown to effectively promote cooperation among agents. Since communication in
real-world scenarios is vulnerable to noise and adversarial attacks, it is
crucial to develop robust communicative MARL techniques. However, existing
research in this domain has predominantly focused on passive defense
strategies, where agents receive all messages equally, making it hard to
balance performance and robustness. We propose an active defense strategy,
where agents automatically reduce the impact of potentially harmful messages on
the final decision. Implementing this strategy poses two challenges:
identifying unreliable messages and properly adjusting their impact on the
final decision. To address them, we design an Active Defense
Multi-Agent Communication framework (ADMAC), which estimates the reliability of
received messages and adjusts their impact on the final decision accordingly
with the help of a decomposable decision structure. The superiority of ADMAC
over existing methods is validated by experiments in three
communication-critical tasks under four types of attacks.
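To make the described mechanism concrete, here is a minimal, hypothetical sketch of the active-defense idea: a learned reliability estimator scores each received message, and a decomposable decision combines the agent's own observation with per-message contributions weighted by those scores. All class, function, and variable names are illustrative assumptions; the paper's actual architecture and training procedure are not reproduced here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ActiveDefenseAgent:
    """Hedged sketch of an active-defense communicating agent.

    Two hypothetical components (names are illustrative, not from the paper):
      * reliability(): maps (observation, message) -> score in [0, 1]
      * a decomposable decision: base action preferences from the agent's own
        observation plus one additive contribution per received message,
        each scaled by its estimated reliability.
    """

    def __init__(self, obs_dim, msg_dim, n_actions, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Toy linear "networks"; a real agent would use trained neural networks.
        self.W_obs = self.rng.normal(size=(n_actions, obs_dim)) * 0.1
        self.W_msg = self.rng.normal(size=(n_actions, msg_dim)) * 0.1
        self.W_rel = self.rng.normal(size=(obs_dim + msg_dim,)) * 0.1

    def reliability(self, obs, msg):
        """Estimated probability that a message is reliable (sigmoid of a score)."""
        score = self.W_rel @ np.concatenate([obs, msg])
        return 1.0 / (1.0 + np.exp(-score))

    def act(self, obs, messages):
        # Base action preferences from the agent's own observation.
        logits = self.W_obs @ obs
        # Each message contributes additively, scaled by its reliability,
        # so a message judged unreliable has little effect on the decision.
        for msg in messages:
            r = self.reliability(obs, msg)
            logits = logits + r * (self.W_msg @ msg)
        return softmax(logits)

# Example call with two received messages (random untrained weights).
agent = ActiveDefenseAgent(obs_dim=4, msg_dim=3, n_actions=2)
print(agent.act(np.ones(4), [np.zeros(3), np.ones(3)]))
```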
Related papers
- T2MAC: Targeted and Trusted Multi-Agent Communication through Selective
Engagement and Evidence-Driven Integration [15.91335141803629]
We propose Targeted and Trusted Multi-Agent Communication (T2MAC) to help agents learn selective engagement and evidence-driven integration.
T2MAC enables agents to craft individualized messages, pinpoint ideal communication windows, and engage with reliable partners.
We evaluate our method on a diverse set of cooperative multi-agent tasks, with varying difficulties, involving different scales.
arXiv Detail & Related papers (2024-01-19T18:00:33Z) - Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z) - Multi-Agent Reinforcement Learning Based on Representational
Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Certifiably Robust Policy Learning against Adversarial Communication in
Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent (see the vote-based sketch after this list).
arXiv Detail & Related papers (2022-06-21T07:32:18Z) - Coordinating Policies Among Multiple Agents via an Intelligent
Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative
Reinforcement Learning [37.24674549469648]
We take the first step toward conducting message attacks on MACRL methods.
We develop a defense method via message reconstruction.
We consider the ability of the malicious agent to adapt to the changing and improving defensive communicative policies.
arXiv Detail & Related papers (2021-08-09T04:41:47Z) - Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z) - Gaussian Process Based Message Filtering for Robust Multi-Agent
Cooperation in the Presence of Adversarial Communication [5.161531917413708]
We consider the problem of providing robustness to adversarial communication in multi-agent systems.
We propose a communication architecture based on Graph Neural Networks (GNNs).
We show that our filtering method is able to reduce the impact that non-cooperative agents cause.
arXiv Detail & Related papers (2020-12-01T14:21:58Z)
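The entry above on certifiably robust policy learning bounds the number of corruptible senders by $C < \frac{N-1}{2}$. The sketch below is a hedged illustration of why vote-style aggregation over per-message decisions tolerates such a bound: if the observed vote margin exceeds $2C$, flipping any $C$ votes cannot change the outcome. The function name and the margin test are illustrative assumptions, not the cited paper's exact certification procedure.

```python
from collections import Counter

def certified_majority_action(candidate_actions, c_max):
    """Majority vote over per-message candidate actions with a simple robustness check.

    Assumes at most `c_max` candidates come from corrupted messages. If the vote
    margin between the top action and the runner-up exceeds 2 * c_max, moving any
    c_max votes cannot change the winner, so the voted action is reported as
    certified. This is an illustration of the general idea, not the paper's method.
    """
    counts = Counter(candidate_actions)
    ranked = counts.most_common()
    top_action, top_votes = ranked[0]
    runner_up_votes = ranked[1][1] if len(ranked) > 1 else 0
    certified = (top_votes - runner_up_votes) > 2 * c_max
    return top_action, certified

# Example: 7 received messages each suggest an action; at most 1 is adversarial.
votes = [1, 1, 1, 1, 1, 0, 2]
print(certified_majority_action(votes, c_max=1))  # -> (1, True): margin 4 > 2
```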