Certifiably Robust Policy Learning against Adversarial Communication in
Multi-agent Systems
- URL: http://arxiv.org/abs/2206.10158v1
- Date: Tue, 21 Jun 2022 07:32:18 GMT
- Title: Certifiably Robust Policy Learning against Adversarial Communication in
Multi-agent Systems
- Authors: Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil
Feizi, Sumitra Ganesh, Furong Huang
- Abstract summary: Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a victim agent.
- Score: 51.6210785955659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication is important in many multi-agent reinforcement learning (MARL)
problems for agents to share information and make good decisions. However, when
deploying trained communicative agents in a real-world application where noise
and potential attackers exist, the safety of communication-based policies
becomes a severe issue that is underexplored. Specifically, if communication
messages are manipulated by malicious attackers, agents relying on
untrustworthy communication may take unsafe actions that lead to catastrophic
consequences. Therefore, it is crucial to ensure that agents will not be misled
by corrupted communication, while still benefiting from benign communication.
In this work, we consider an environment with $N$ agents, where the attacker
may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a
victim agent. For this strong threat model, we propose a certifiable defense by
constructing a message-ensemble policy that aggregates multiple randomly
ablated message sets. Theoretical analysis shows that this message-ensemble
policy can utilize benign communication while being certifiably robust to
adversarial communication, regardless of the attacking algorithm. Experiments
in multiple environments verify that our defense significantly improves the
robustness of trained policies against various types of attacks.
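The core of the defense described above can be sketched as a majority vote over a base policy evaluated on ablated message subsets: if fewer than half of the size-$k$ subsets can contain a corrupted sender, the vote is dominated by attack-free subsets. The function names, the discrete-action assumption, and the exhaustive subset enumeration below are illustrative choices, not the paper's actual implementation:

```python
from collections import Counter
from itertools import combinations

def message_ensemble_action(base_policy, observation, messages, k):
    """Majority-vote action over all size-k ablated message subsets.

    base_policy(observation, subset) -> discrete action, where subset is a
    dict mapping sender id -> message. With C < (N-1)/2 corrupted senders
    and a suitable k, most subsets contain no corrupted message, so the
    vote certifiably matches the clean-communication action.
    """
    senders = sorted(messages)
    votes = Counter()
    for subset in combinations(senders, k):
        ablated = {s: messages[s] for s in subset}  # keep only k messages
        votes[base_policy(observation, ablated)] += 1
    return votes.most_common(1)[0][0]
```

For example, with 5 senders of which 1 is corrupted and k=2, only 4 of the 10 subsets see the corrupted message, so the benign action wins the vote regardless of what the attacker sends.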
Related papers
- Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks [82.3753728955968]
We introduce a novel Mixture-of-Experts (MoE)-based SemCom system.
This system comprises a gating network and multiple experts, each specializing in different security challenges.
The gating network adaptively selects suitable experts to counter heterogeneous attacks based on user-defined security requirements.
A case study in vehicular networks demonstrates the efficacy of the MoE-based SemCom system.
arXiv Detail & Related papers (2024-09-24T03:17:51Z) - Secure Semantic Communication via Paired Adversarial Residual Networks [59.468221305630784]
This letter explores the positive side of the adversarial attack for the security-aware semantic communication system.
A pair of matching pluggable modules is installed: one after the semantic transmitter and the other before the semantic receiver.
The proposed scheme is capable of fooling the eavesdropper while maintaining high-quality semantic communication.
arXiv Detail & Related papers (2024-07-02T08:32:20Z) - Robust Communicative Multi-Agent Reinforcement Learning with Active
Defense [38.6815513394882]
We propose an active defense strategy, where agents automatically reduce the impact of potentially harmful messages on the final decision.
We design an Active Defense Multi-Agent Communication framework (ADMAC), which estimates the reliability of received messages and adjusts their impact on the final decision.
The superiority of ADMAC over existing methods is validated by experiments in three communication-critical tasks under four types of attacks.
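The active-defense idea of down-weighting unreliable messages can be sketched as reliability-weighted fusion. This is a simplified illustration under the assumption that a learned estimator produces per-sender reliability scores; the function name and interface are hypothetical, not ADMAC's actual architecture:

```python
def active_defense_aggregate(messages, reliability):
    """Fuse messages as a reliability-weighted average.

    messages: dict sender -> message vector (list of floats)
    reliability: dict sender -> score in [0, 1], e.g. from a learned
    reliability estimator; a low score shrinks that sender's influence
    on the final decision.
    """
    dim = len(next(iter(messages.values())))
    total = sum(reliability[s] for s in messages)
    if total == 0:
        return [0.0] * dim  # no trusted message: fall back to zeros
    fused = [0.0] * dim
    for s, m in messages.items():
        w = reliability[s] / total  # normalized trust weight
        for i in range(dim):
            fused[i] += w * m[i]
    return fused
```

A message judged fully unreliable (score 0) is effectively ablated, while benign messages still contribute, which mirrors the goal of reducing harmful influence without discarding useful communication.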
arXiv Detail & Related papers (2023-12-16T09:02:56Z) - Multi-Agent Reinforcement Learning Based on Representational
Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z) - AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent
Reinforcement Learning [4.884877440051105]
We propose a novel communication protocol called Adaptively Controlled Two-Hop Communication (AC2C)
AC2C employs an adaptive two-hop communication strategy to enable long-range information exchange among agents to boost performance.
We evaluate AC2C on three cooperative multi-agent tasks, and the experimental results show that it outperforms relevant baselines with lower communication costs.
arXiv Detail & Related papers (2023-02-24T09:00:34Z) - Over-communicate no more: Situated RL agents learn concise communication
protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL)
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - Coordinating Policies Among Multiple Agents via an Intelligent
Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z) - Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.