Enhancing Multi-Agent Coordination through Common Operating Picture Integration
- URL: http://arxiv.org/abs/2311.04740v1
- Date: Wed, 8 Nov 2023 15:08:55 GMT
- Title: Enhancing Multi-Agent Coordination through Common Operating Picture Integration
- Authors: Peihong Yu, Bhoram Lee, Aswin Raghavan, Supun Samarasekara, Pratap Tokekar, James Zachary Hare
- Abstract summary: We present an approach to multi-agent coordination in which each agent integrates its history of observations, actions, and received messages into a Common Operating Picture (COP).
Our results demonstrate the efficacy of COP integration and show that COP-based training yields policies that are more robust to out-of-distribution initial states than state-of-the-art Multi-Agent Reinforcement Learning (MARL) methods.
- Score: 14.927199437011044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-agent systems, agents possess only local observations of the environment, so communication between teammates becomes crucial for enhancing coordination. Past research has primarily focused on encoding local information into embedding messages that are unintelligible to humans. We find that using these messages in agents' policy learning leads to brittle policies when tested on out-of-distribution initial states. We present an approach to multi-agent coordination in which each agent is equipped with the capability to integrate its (history of) observations, actions, and received messages into a Common Operating Picture (COP) and disseminate the COP. This process takes into account the dynamic nature of the environment and the shared mission. We conducted experiments in the StarCraft II environment to validate our approach. Our results demonstrate the efficacy of COP integration and show that COP-based training leads to more robust policies than state-of-the-art Multi-Agent Reinforcement Learning (MARL) methods when faced with out-of-distribution initial states.
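To make the COP idea concrete, here is a minimal, hypothetical sketch: each agent folds its current observation, last action, and pooled teammate messages into a recurrent state that serves as its COP, which it then broadcasts. The module name, dimensions, mean-pooling aggregator, and GRU fusion are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class COPIntegrator(nn.Module):
    """Hypothetical module: maintains a per-agent COP as a recurrent
    hidden state fused from observations, actions, and messages."""

    def __init__(self, obs_dim, act_dim, msg_dim, cop_dim):
        super().__init__()
        self.msg_dim = msg_dim
        self.rnn = nn.GRUCell(obs_dim + act_dim + msg_dim, cop_dim)

    def forward(self, obs, last_act, messages, cop):
        # obs: (1, obs_dim), last_act: (1, act_dim), cop: (1, cop_dim)
        # messages: (num_teammates, msg_dim) of received messages/COPs
        if messages.numel() > 0:
            pooled = messages.mean(dim=0, keepdim=True)  # permutation-invariant
        else:
            pooled = torch.zeros(1, self.msg_dim)        # no teammates heard
        x = torch.cat([obs, last_act, pooled], dim=-1)
        return self.rnn(x, cop)  # updated COP, disseminated next step

# One step for one agent (all sizes illustrative):
agent = COPIntegrator(obs_dim=32, act_dim=8, msg_dim=64, cop_dim=64)
cop = torch.zeros(1, 64)
cop = agent(torch.randn(1, 32), torch.zeros(1, 8), torch.randn(2, 64), cop)
```

Mean pooling keeps the update invariant to the number and order of teammates; the disseminated COP doubles as the message other agents receive at the next step.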
Related papers
- DCMAC: Demand-aware Customized Multi-Agent Communication via Upper Bound Training [9.068971933560416]
We propose a Demand-aware Customized Multi-Agent Communication (DCMAC) protocol, which uses upper-bound training to obtain an ideal policy.
Experimental results reveal that DCMAC significantly outperforms the baseline algorithms in both unconstrained and communication-constrained scenarios.
arXiv Detail & Related papers (2024-09-11T09:23:27Z)
- T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration [15.91335141803629]
We propose Targeted and Trusted Multi-Agent Communication (T2MAC) to help agents learn selective engagement and evidence-driven integration.
T2MAC enables agents to craft individualized messages, pinpoint ideal communication windows, and engage with reliable partners.
We evaluate our method on a diverse set of cooperative multi-agent tasks of varying difficulty and scale.
arXiv Detail & Related papers (2024-01-19T18:00:33Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL) that combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning [7.163485179361718]
We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time.
We contribute MARO, an approach that makes use of an auto-regressive predictive model, trained in a centralized manner, to estimate missing agents' observations.
arXiv Detail & Related papers (2022-10-12T14:58:32Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance (see the sketch after this list).
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Depthwise Convolution for Multi-Agent Communication with Enhanced Mean-Field Approximation [9.854975702211165]
We propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge.
First, we design a new communication protocol that exploits the ability of depthwise convolution to efficiently extract local relations.
Second, we introduce the mean-field approximation into our method to reduce the scale of agent interactions.
arXiv Detail & Related papers (2022-03-06T07:42:43Z)
- Inference-Based Deterministic Messaging For Multi-Agent Communication [1.8275108630751844]
We study learning in matrix-based signaling games to show that decentralized methods can converge to a suboptimal policy.
We then propose a modification to the messaging policy, in which the sender deterministically chooses the best message that helps the receiver to infer the sender's observation.
arXiv Detail & Related papers (2021-03-03T03:09:22Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that, under realistic assumptions (a non-uniform distribution of intents and a common-knowledge energy cost), these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z)
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to those of the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
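As a concrete illustration of one of the communication schemes above, here is a minimal, hypothetical sketch of the intelligent-facilitator idea from "Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel": a centralized module attends over all agents' outgoing signals and returns an interpreted message to each agent. The class name, dimensions, and the choice of multi-head self-attention are illustrative assumptions, not necessarily the paper's actual design.

```python
import torch
import torch.nn as nn

class Facilitator(nn.Module):
    """Hypothetical centralized channel: sifts through all agents'
    signals with self-attention and returns one message per agent."""

    def __init__(self, sig_dim=32, out_dim=16, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(sig_dim, n_heads, batch_first=True)
        self.out = nn.Linear(sig_dim, out_dim)

    def forward(self, signals):
        # signals: (batch, n_agents, sig_dim), one outgoing signal per agent
        mixed, _ = self.attn(signals, signals, signals)  # combine all signals
        return self.out(mixed)  # (batch, n_agents, out_dim) messages back

facilitator = Facilitator()
messages = facilitator(torch.randn(1, 5, 32))  # messages for 5 agents
```

Because the facilitator sees every agent's signal at once, each returned message can condition on the whole team's state, which individual point-to-point channels cannot do.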
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.