R-MADDPG for Partially Observable Environments and Limited Communication
- URL: http://arxiv.org/abs/2002.06684v2
- Date: Tue, 18 Feb 2020 02:55:30 GMT
- Title: R-MADDPG for Partially Observable Environments and Limited Communication
- Authors: Rose E. Wang, Michael Everett, Jonathan P. How
- Abstract summary: This paper introduces a deep recurrent multiagent actor-critic framework (R-MADDPG) for handling multiagent coordination under partially observable settings and limited communication.
We demonstrate that the resulting framework learns time dependencies for sharing missing observations, handling resource limitations, and developing different communication patterns among agents.
- Score: 42.771013165298186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are several real-world tasks that would benefit from applying
multiagent reinforcement learning (MARL) algorithms, including the coordination
among self-driving cars. The real world has challenging conditions for
multiagent learning systems, such as its partially observable and nonstationary
nature. Moreover, if agents must share a limited resource (e.g. network
bandwidth) they must all learn how to coordinate resource use. This paper
introduces a deep recurrent multiagent actor-critic framework (R-MADDPG) for
handling multiagent coordination under partially observable settings and limited
communication. We investigate the effects of recurrency on the performance and
communication use of a team of agents. We demonstrate that the resulting
framework learns time dependencies for sharing missing observations, handling
resource limitations, and developing different communication patterns among
agents.
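To make the framework concrete, below is a minimal sketch of a recurrent actor for an R-MADDPG-style agent. The LSTM cell, layer sizes, message handling, and the binary send/stay-silent communication head are illustrative assumptions, not the paper's exact architecture; the point is that the recurrent hidden state lets the policy act on information it did not observe this step.

```python
# Minimal sketch of a recurrent actor for an R-MADDPG-style agent (PyTorch).
# The LSTM cell, layer sizes, and the combined action/communication heads are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    def __init__(self, obs_dim, action_dim, msg_dim, hidden_dim=64):
        super().__init__()
        self.fc = nn.Linear(obs_dim + msg_dim, hidden_dim)
        # The recurrent cell carries a hidden state across timesteps, letting
        # the policy remember observations teammates have not yet shared.
        self.rnn = nn.LSTMCell(hidden_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, action_dim)
        # A separate head decides whether to spend the limited communication
        # budget this step (e.g., 1 = broadcast observation, 0 = stay silent).
        self.comm_head = nn.Linear(hidden_dim, 2)

    def forward(self, obs, last_msg, hidden):
        x = torch.relu(self.fc(torch.cat([obs, last_msg], dim=-1)))
        h, c = self.rnn(x, hidden)
        return self.action_head(h), self.comm_head(h), (h, c)

# Usage: one forward step per agent, threading the hidden state through time.
actor = RecurrentActor(obs_dim=10, action_dim=5, msg_dim=10)
obs, msg = torch.zeros(1, 10), torch.zeros(1, 10)
hidden = (torch.zeros(1, 64), torch.zeros(1, 64))
action_logits, comm_logits, hidden = actor(obs, msg, hidden)
```

In a MADDPG-style setup, a centralized critic (omitted here) would additionally condition on all agents' observations and actions during training.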
Related papers
- Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models [41.95288786980204]
Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication.
We present a framework for training large language models as collaborative agents to enable coordinated behaviors in cooperative MARL.
A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates.
arXiv Detail & Related papers (2024-07-17T13:14:00Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
An agent trained by offline MARL can inherit a random policy present in the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z)
- RACA: Relation-Aware Credit Assignment for Ad-Hoc Cooperation in Multi-Agent Deep Reinforcement Learning [55.55009081609396]
We propose a novel method, called Relation-Aware Credit Assignment (RACA), which achieves zero-shot generalization in ad-hoc cooperation scenarios.
RACA takes advantage of a graph-based relation encoder to encode the topological structure between agents.
Our method outperforms baseline methods on the StarCraft II micromanagement benchmark and in ad-hoc cooperation scenarios.
arXiv Detail & Related papers (2022-06-02T03:39:27Z)
- MACRPO: Multi-Agent Cooperative Recurrent Policy Optimization [17.825845543579195]
We propose a new multi-agent actor-critic method called Multi-Agent Cooperative Recurrent Proximal Policy Optimization (MACRPO).
We use a recurrent layer in the critic's network architecture and propose a new framework that uses a meta-trajectory to train the recurrent layer (see the sketch after this list).
We evaluate our algorithm on three challenging multi-agent environments with continuous and discrete action spaces.
arXiv Detail & Related papers (2021-09-02T12:43:35Z)
- AoI-Aware Resource Allocation for Platoon-Based C-V2X Networks via Multi-Agent Multi-Task Reinforcement Learning [22.890835786710316]
This paper investigates the problem of age of information (AoI) aware radio resource management for a platooning system.
Multiple autonomous platoons exploit the cellular wireless vehicle-to-everything (C-V2X) communication technology to disseminate the cooperative awareness messages (CAMs) to their followers.
We exploit a distributed resource allocation framework based on multi-agent reinforcement learning (MARL), where each platoon leader (PL) acts as an agent and interacts with the environment to learn its optimal policy.
arXiv Detail & Related papers (2021-05-10T08:39:56Z)
- Information State Embedding in Partially Observable Cooperative Multi-Agent Reinforcement Learning [19.617644643147948]
We introduce the concept of an information state embedding that serves to compress agents' histories.
We quantify how the compression error influences the resulting value functions for decentralized control.
The proposed embed-then-learn pipeline opens the black-box of existing (partially observable) MARL algorithms.
arXiv Detail & Related papers (2020-04-02T16:03:42Z)
- A Visual Communication Map for Multi-Agent Deep Reinforcement Learning [7.003240657279981]
Multi-agent learning poses significant challenges in allocating a concealed communication medium.
Recent studies typically combine a specialized neural network with reinforcement learning to enable communication between agents.
This paper proposes a more scalable approach that not only deals with a great number of agents but also enables collaboration between dissimilar functional agents.
arXiv Detail & Related papers (2020-02-27T02:38:21Z)
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
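As a companion to the MACRPO entry above, the sketch below shows one way a recurrent centralized critic could be trained on a meta-trajectory that interleaves the agents' per-timestep experiences into a single sequence. The GRU, the interleaving order, and all dimensions are assumptions for illustration and may differ from the paper's exact construction.

```python
# Sketch of a MACRPO-style recurrent critic over a meta-trajectory (PyTorch).
# Interleaving agents' per-step experiences into one sequence is an assumed
# simplification of the paper's meta-trajectory construction.
import torch
import torch.nn as nn

class RecurrentCritic(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim + action_dim, hidden_dim, batch_first=True)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, meta_trajectory):
        # meta_trajectory: (batch, n_agents * T, obs_dim + action_dim)
        out, _ = self.gru(meta_trajectory)
        return self.value_head(out)  # one value estimate per sequence element

def build_meta_trajectory(obs, actions):
    # obs: (batch, n_agents, T, obs_dim); actions: (batch, n_agents, T, act_dim)
    batch, n_agents, T, _ = obs.shape
    x = torch.cat([obs, actions], dim=-1)       # join observations and actions
    x = x.permute(0, 2, 1, 3)                   # (batch, T, n_agents, features)
    return x.reshape(batch, T * n_agents, -1)   # interleave agents by timestep

critic = RecurrentCritic(obs_dim=8, action_dim=2)
obs = torch.zeros(4, 3, 10, 8)      # 4 episodes, 3 agents, 10 timesteps
actions = torch.zeros(4, 3, 10, 2)
values = critic(build_meta_trajectory(obs, actions))  # shape (4, 30, 1)
```

Letting the critic's recurrent layer read all agents' experiences in one sequence is what allows it to capture cross-agent and temporal dependencies jointly, which a per-agent feedforward critic cannot.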