AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent
Reinforcement Learning
- URL: http://arxiv.org/abs/2302.12515v2
- Date: Tue, 23 May 2023 12:59:56 GMT
- Title: AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent
Reinforcement Learning
- Authors: Xuefeng Wang, Xinran Li, Jiawei Shao and Jun Zhang
- Abstract summary: We propose a novel communication protocol called Adaptively Controlled Two-Hop Communication (AC2C).
AC2C employs an adaptive two-hop communication strategy that enables long-range information exchange among agents and boosts performance.
We evaluate AC2C on three cooperative multi-agent tasks, and the experimental results show that it outperforms relevant baselines with lower communication costs.
- Score: 4.884877440051105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning communication strategies in cooperative multi-agent reinforcement
learning (MARL) has recently attracted considerable attention. Early studies
typically assumed a fully-connected communication topology among agents, which
incurs high communication costs and may not be feasible in practice. Some recent works
have developed adaptive communication strategies to reduce communication
overhead, but these methods cannot effectively obtain valuable information from
agents that are beyond the communication range. In this paper, we consider a
realistic communication model where each agent has a limited communication
range, and the communication topology dynamically changes. To facilitate
effective agent communication, we propose a novel communication protocol called
Adaptively Controlled Two-Hop Communication (AC2C). After an initial local
communication round, AC2C employs an adaptive two-hop communication strategy
that enables long-range information exchange among agents to boost performance.
This strategy is implemented by a communication controller, which determines
whether each agent should request two-hop messages and thus reduces the
communication overhead during distributed execution. We evaluate AC2C on three
cooperative multi-agent tasks, and the experimental results show that it
outperforms relevant baselines with lower communication costs.
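To make the mechanism concrete, here is a minimal PyTorch sketch of the adaptive two-hop gating idea described in the abstract. All names (`TwoHopController`, `communicate`), the mean-pooled message aggregation, and the hard 0.5 threshold are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of adaptive two-hop communication with a gating controller.
# Assumptions (not from the paper): mean-pooled aggregation, an MLP gate,
# and a hard threshold on the request probability.
import torch
import torch.nn as nn


class TwoHopController(nn.Module):
    """Gate network: should this agent request two-hop messages?"""

    def __init__(self, hidden_dim: int, msg_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(hidden_dim + msg_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, hidden, one_hop_msg):
        logits = self.gate(torch.cat([hidden, one_hop_msg], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # per-agent request probability


def communicate(hidden, messages, adjacency, controller, threshold=0.5):
    """Round 1: aggregate one-hop neighbour messages under the limited range.
    Round 2 (gated): neighbours relay their own aggregates, which carry
    two-hop information."""
    deg = adjacency.sum(-1, keepdim=True).clamp(min=1.0)
    one_hop = adjacency @ messages / deg              # round 1: local exchange
    request = controller(hidden, one_hop) > threshold # controller gates round 2
    two_hop = adjacency @ one_hop / deg               # relayed aggregates
    combined = one_hop + request.float().unsqueeze(-1) * two_hop
    return combined, request                          # mask tracks two-hop cost


# Toy usage: 4 agents on a line graph (limited communication range).
N, H, M = 4, 16, 8
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
agg, mask = communicate(torch.randn(N, H), torch.randn(N, M), adj,
                        TwoHopController(H, M))
```

Note that the hard threshold is non-differentiable; a practical controller would be trained with a straight-through or Gumbel-style estimator, or with reinforcement learning under an explicit communication-cost penalty.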
Related papers
- Pragmatic Communication in Multi-Agent Collaborative Perception [80.14322755297788]
Collaborative perception entails a trade-off between perception ability and communication cost.
We propose PragComm, a multi-agent collaborative perception system with two key components.
PragComm consistently outperforms previous methods with more than 32.7K times lower communication volume.
arXiv Detail & Related papers (2024-01-23T11:58:08Z)
- Context-aware Communication for Multi-agent Reinforcement Learning [6.109127175562235]
We develop a context-aware communication scheme (CACOM) for multi-agent reinforcement learning (MARL).
In the first stage, agents exchange coarse representations in a broadcast fashion, providing context for the second stage.
Following this, agents utilize attention mechanisms in the second stage to selectively generate messages personalized for the receivers.
To evaluate the effectiveness of CACOM, we integrate it with both actor-critic and value-based MARL algorithms.
arXiv Detail & Related papers (2023-12-25T03:33:08Z)
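A rough sketch of the two-stage scheme above, assuming a simple linear encoder for the coarse broadcast and single-head dot-product attention for the personalized second stage; module names and dimensions are hypothetical, not CACOM's actual design.

```python
# Stage 1: every agent broadcasts a cheap coarse context vector.
# Stage 2: receivers use that context to attend over senders and pull
# messages personalised for them. All layer names are illustrative.
import torch
import torch.nn as nn


class TwoStageComm(nn.Module):
    def __init__(self, obs_dim: int, ctx_dim: int, msg_dim: int):
        super().__init__()
        self.coarse = nn.Linear(obs_dim, ctx_dim)   # stage-1 broadcast code
        self.query = nn.Linear(ctx_dim, msg_dim)    # receiver context -> query
        self.key = nn.Linear(obs_dim, msg_dim)
        self.value = nn.Linear(obs_dim, msg_dim)

    def forward(self, obs):
        ctx = self.coarse(obs)                      # (N, ctx_dim): stage 1
        q, k, v = self.query(ctx), self.key(obs), self.value(obs)
        attn = torch.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                             # per-receiver inbound message


inbound = TwoStageComm(obs_dim=10, ctx_dim=4, msg_dim=8)(torch.randn(5, 10))
```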
- Multi-Agent Reinforcement Learning Based on Representational Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z)
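The "which part, to whom" idea above can be sketched as a gate over message chunks per receiver; everything below (chunking, Bernoulli gates, the module name `WhichToWhom`) is an illustrative assumption rather than the paper's architecture.

```python
# Each agent encodes its observation into message chunks and samples a
# binary gate per (receiver, chunk): 'which' part is sent 'to whom'.
import torch
import torch.nn as nn


class WhichToWhom(nn.Module):
    def __init__(self, obs_dim, n_agents, n_chunks, chunk_dim):
        super().__init__()
        self.n_agents, self.n_chunks, self.chunk_dim = n_agents, n_chunks, chunk_dim
        self.encode = nn.Linear(obs_dim, n_chunks * chunk_dim)
        self.gate = nn.Linear(obs_dim, n_agents * n_chunks)

    def forward(self, obs):
        n = obs.shape[0]                                   # n == self.n_agents
        chunks = self.encode(obs).view(n, self.n_chunks, self.chunk_dim)
        probs = torch.sigmoid(self.gate(obs)).view(n, self.n_agents, self.n_chunks)
        send = torch.bernoulli(probs)                      # sender x receiver x chunk
        outbound = send.unsqueeze(-1) * chunks.unsqueeze(1)
        return outbound.sum(0), send                       # inbound per receiver, gate mask


model = WhichToWhom(obs_dim=10, n_agents=4, n_chunks=3, chunk_dim=5)
inbound, send = model(torch.randn(4, 10))                  # (4, 3, 5), (4, 4, 3)
```

Sampled gates like these would be trained with a score-function or straight-through gradient in practice.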
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
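Situated communication as described above has a simple formal core: the agent's action space is the union of environment actions and utterances, so speaking costs a timestep of acting. A toy illustration with hypothetical constants:

```python
# Toy action space for 'situated' communication: ids [0, N_ENV_ACTIONS) act
# on the environment, the remaining ids emit a vocabulary token instead.
N_ENV_ACTIONS, VOCAB_SIZE = 4, 3
N_TOTAL_ACTIONS = N_ENV_ACTIONS + VOCAB_SIZE


def decode_action(action_id: int) -> dict:
    """Acting and speaking are mutually exclusive in a given step."""
    if action_id < N_ENV_ACTIONS:
        return {"env_action": action_id, "utterance": None}
    return {"env_action": None, "utterance": action_id - N_ENV_ACTIONS}


assert decode_action(2) == {"env_action": 2, "utterance": None}
assert decode_action(5) == {"env_action": None, "utterance": 1}
```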
- Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe yet underexplored issue.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent.
arXiv Detail & Related papers (2022-06-21T07:32:18Z)
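The bound $C < \frac{N-1}{2}$ is the natural threshold for any majority-style aggregation of inbound messages: if the honest messages agree, fewer than half being corrupted cannot flip the outcome. The sketch below illustrates only this arithmetic, not the paper's actual certified defense.

```python
# Majority vote over one action recommendation per inbound message.
# With N-1 = 6 messages and C = 2 < (N-1)/2 = 3 adversarial ones,
# agreeing honest messages always hold the majority.
from collections import Counter


def majority_action(per_message_votes):
    return Counter(per_message_votes).most_common(1)[0][0]


votes = ["left"] * 4 + ["right"] * 2   # last two controlled by the attacker
assert majority_action(votes) == "left"
```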
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
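One way to picture such a facilitator is as a learned central channel that attends over every agent's signal and returns per-agent feedback; the sketch below uses standard single-head self-attention and is an illustrative assumption, not the paper's architecture.

```python
# A learned channel: pools all agents' signals with attention, then emits
# per-agent feedback. Module and dimension names are hypothetical.
import torch
import torch.nn as nn


class Facilitator(nn.Module):
    def __init__(self, sig_dim: int, out_dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(sig_dim, num_heads=1, batch_first=True)
        self.out = nn.Linear(sig_dim, out_dim)

    def forward(self, signals):
        # signals: (batch, n_agents, sig_dim); each agent queries the pool
        mixed, _ = self.attn(signals, signals, signals)
        return self.out(mixed)          # (batch, n_agents, out_dim) feedback


feedback = Facilitator(sig_dim=8, out_dim=16)(torch.randn(2, 5, 8))
```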
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Learning to Communicate Using Counterfactual Reasoning [2.8110705488739676]
This paper introduces the novel multi-agent counterfactual communication learning (MACC) method.
MACC adapts counterfactual reasoning in order to overcome the credit assignment problem for communicating agents.
Our experiments show that MACC is able to outperform the state-of-the-art baselines in four different scenarios in the Particle environment.
arXiv Detail & Related papers (2020-06-12T14:02:04Z)
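Counterfactual credit assignment for messages can be written down compactly: compare the critic's value for the message actually sent against a baseline that marginalizes the message out, in the style of COMA. This is a generic sketch of that idea, not MACC's exact formulation.

```python
# COMA-style counterfactual advantage applied to the communication action:
# how much better was the sent message than the policy's average message?
import torch


def message_counterfactual_advantage(q_values, msg_probs, sent_msg):
    """q_values:  (n_msgs,) critic value for each candidate message
    msg_probs: (n_msgs,) communication policy probabilities
    sent_msg:  index of the message actually sent"""
    baseline = (msg_probs * q_values).sum()   # marginalise the message out
    return q_values[sent_msg] - baseline


q = torch.tensor([1.0, 0.2, -0.5])
pi = torch.tensor([0.6, 0.3, 0.1])
adv = message_counterfactual_advantage(q, pi, sent_msg=0)  # > 0: message helped
```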
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
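The entry above names the two ingredients: a feed-forward prior over whom to talk to, and a causal-inference signal for training it. A minimal sketch under those assumptions follows; the KL-based influence measure below is a generic choice, and the paper's exact formulation may differ.

```python
# A feed-forward prior deciding whether agent i should communicate with
# agent j, plus a KL-based causal-influence signal to supervise it.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommPrior(nn.Module):
    def __init__(self, obs_dim: int, n_agents: int):
        super().__init__()
        self.n_agents = n_agents
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs_i, j):
        one_hot = F.one_hot(j, self.n_agents).float()    # identify the target agent
        return torch.sigmoid(self.net(torch.cat([obs_i, one_hot], dim=-1)))


def causal_influence(policy_with_j, policy_without_j):
    """KL(policy_with_j || policy_without_j): how much conditioning on agent j
    changes agent i's policy; a large value suggests communication is useful."""
    return F.kl_div(policy_without_j.log(), policy_with_j, reduction="sum")


prior = CommPrior(obs_dim=10, n_agents=4)
p_comm = prior(torch.randn(2, 10), torch.tensor([1, 3]))  # (2, 1) probabilities
```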
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework, termed Learning Structured Communication (LSC), which uses a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)