Multi-Agent Sequential Decision-Making via Communication
- URL: http://arxiv.org/abs/2209.12713v1
- Date: Mon, 26 Sep 2022 14:08:03 GMT
- Title: Multi-Agent Sequential Decision-Making via Communication
- Authors: Ziluo Ding, Kefan Su, Weixin Hong, Liwen Zhu, Tiejun Huang, and
Zongqing Lu
- Abstract summary: We propose a novel communication scheme, Sequential Communication (SeqComm).
In the negotiation phase, agents determine the priority of decision-making by communicating the hidden states of their observations.
In the launching phase, the upper-level agents take the lead in making decisions and communicate their actions to the lower-level agents.
- Score: 27.465335930802453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication helps agents obtain information about others so that better
coordinated behavior can be learned. Some existing work communicates predicted
future trajectories to others, hoping to get clues about what others would do
for better coordination. However, when agents are treated synchronously,
circular dependencies can occur, making decision-making hard to coordinate. In
this paper, we propose a novel communication scheme, Sequential Communication
(SeqComm). SeqComm treats agents asynchronously (the upper-level agents make
decisions before the lower-level ones) and has two communication phases. In the
negotiation phase, agents determine the priority of decision-making by
communicating the hidden states of their observations and comparing the value
of intention, which is obtained by modeling the environment dynamics. In the
launching phase, the upper-level agents take the lead in making decisions and
communicate their actions to the lower-level agents. Theoretically, we prove
that the policies learned by SeqComm are guaranteed to improve monotonically
and converge. Empirically, we show that SeqComm outperforms existing methods in
various multi-agent cooperative tasks.
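The two-phase scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder, the intention-value estimate (which in SeqComm comes from rolling out a learned world model), and the policies are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM = 3, 4

def encode(obs):
    """Stand-in for each agent's encoder producing a hidden state."""
    return np.tanh(obs)

def intention_value(hidden_all, agent_id):
    """Hypothetical value of agent `agent_id` deciding first. In SeqComm this
    is estimated by rolling out a learned world model; here it is a simple
    placeholder score computed from the shared hidden states."""
    return float(np.sum(hidden_all[agent_id]))

def policy(hidden, upper_actions):
    """Stand-in policy: conditions on the agent's own hidden state and the
    actions already chosen and communicated by upper-level agents."""
    score = hidden.sum() + sum(upper_actions.values())
    return int(score > 0)

# --- Negotiation phase: agents share hidden states of their observations
#     and rank themselves by the value of intention (higher -> upper level).
observations = rng.normal(size=(N_AGENTS, OBS_DIM))
hidden = np.stack([encode(o) for o in observations])
values = {i: intention_value(hidden, i) for i in range(N_AGENTS)}
priority = sorted(values, key=values.get, reverse=True)

# --- Launching phase: agents act asynchronously in priority order; each
#     communicates its chosen action to the lower-level agents after it.
actions = {}
for agent in priority:
    actions[agent] = policy(hidden[agent], actions)
```

Because each agent conditions only on actions already fixed by higher-priority agents, the circular dependency that arises under synchronous decision-making cannot occur.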
Related papers
- Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models [41.95288786980204]
Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication.
We present a framework for training large language models as collaborative agents to enable coordinated behaviors in cooperative MARL.
A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates.
arXiv Detail & Related papers (2024-07-17T13:14:00Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning [7.163485179361718]
We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time.
We contribute MARO, an approach that makes use of an auto-regressive predictive model, trained in a centralized manner, to estimate missing agents' observations.
arXiv Detail & Related papers (2022-10-12T14:58:32Z) - Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z) - Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z) - Learning Selective Communication for Multi-Agent Path Finding [18.703918339797283]
Decision Causal Communication (DCC) is a simple yet efficient model that enables agents to select which neighbors to communicate with.
DCC is suitable for decentralized execution to handle large scale problems.
arXiv Detail & Related papers (2021-09-12T03:07:20Z) - Inference-Based Deterministic Messaging For Multi-Agent Communication [1.8275108630751844]
We study learning in matrix-based signaling games to show that decentralized methods can converge to a suboptimal policy.
We then propose a modification to the messaging policy, in which the sender deterministically chooses the best message that helps the receiver to infer the sender's observation.
arXiv Detail & Related papers (2021-03-03T03:09:22Z) - Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z) - On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.