Scalable Multi-Agent Reinforcement Learning through Intelligent
Information Aggregation
- URL: http://arxiv.org/abs/2211.02127v3
- Date: Tue, 16 May 2023 15:17:46 GMT
- Title: Scalable Multi-Agent Reinforcement Learning through Intelligent
Information Aggregation
- Authors: Siddharth Nayak, Kenneth Choi, Wenqi Ding, Sydney Dolan, Karthik
Gopalakrishnan, Hamsa Balakrishnan
- Abstract summary: We propose a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner.
InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm.
- Score: 6.09506921406322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of multi-agent navigation and collision avoidance
when observations are limited to the local neighborhood of each agent. We
propose InforMARL, a novel architecture for multi-agent reinforcement learning
(MARL) which uses local information intelligently to compute paths for all the
agents in a decentralized manner. Specifically, InforMARL aggregates
information about the local neighborhood of agents for both the actor and the
critic using a graph neural network and can be used in conjunction with any
standard MARL algorithm. We show that (1) in training, InforMARL has better
sample efficiency and performance than baseline approaches, despite using less
information, and (2) in testing, it scales well to environments with arbitrary
numbers of agents and obstacles. We illustrate these results using four task
environments, including one with predetermined goals for each agent, and one in
which the agents collectively try to cover all goals. Code available at
https://github.com/nsidn98/InforMARL.
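To make the core idea concrete, here is a minimal sketch of GNN-style neighborhood aggregation of the kind the abstract describes. It is not the authors' implementation (see the repository above for that): the function and weight names are illustrative, and a single mean-aggregation round stands in for a full graph neural network.

```python
# Minimal sketch of GNN-style neighborhood aggregation (illustrative only;
# see the linked repository for the actual InforMARL implementation).
import numpy as np

def aggregate_neighborhood(obs, positions, radius, W_self, W_neigh):
    """One round of message passing: each agent averages the features of
    neighbors within `radius` and mixes them with its own observation."""
    n = obs.shape[0]
    agg = np.zeros_like(obs)
    for i in range(n):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        mask = (dists < radius) & (np.arange(n) != i)
        if mask.any():
            agg[i] = obs[mask].mean(axis=0)   # permutation-invariant mean
    # Shared embedding that could feed both the actor and the critic.
    return np.tanh(obs @ W_self + agg @ W_neigh)

rng = np.random.default_rng(0)
obs = rng.normal(size=(5, 8))                 # 5 agents, 8-dim local obs
pos = rng.uniform(size=(5, 2))                # 2-D positions
emb = aggregate_neighborhood(obs, pos, 0.5,
                             rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(emb.shape)  # (5, 16): one embedding per agent, for any number of agents
```

Because the aggregation is a permutation-invariant mean over whatever neighbors happen to be in range, the same weights apply to any number of agents, which is consistent with the abstract's claim that the trained policy scales to arbitrary team sizes.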
Related papers
- Decentralized Monte Carlo Tree Search for Partially Observable
Multi-agent Pathfinding [49.730902939565986]
The Multi-Agent Pathfinding (MAPF) problem involves finding a set of conflict-free paths for a group of agents confined to a graph.
In this study, we focus on the decentralized MAPF setting, where the agents may observe the other agents only locally.
We propose a decentralized multi-agent Monte Carlo Tree Search (MCTS) method for MAPF tasks.
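As background, a minimal single-agent UCT core is sketched below; the paper's method runs such a search per agent under partial observability, which this sketch does not reproduce. The toy domain and all names are illustrative.

```python
# Minimal single-agent UCT core (illustrative; not the paper's decentralized variant).
import math, random

class Node:
    def __init__(self, state):
        self.state, self.children, self.visits, self.value = state, {}, 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score.
    return max(node.children.values(),
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts(root, actions, step, rollout, n_iters=300):
    for _ in range(n_iters):
        node, path = root, [root]
        while node.children:                   # 1. selection
            node = uct_select(node)
            path.append(node)
        for a in actions:                      # 2. expansion
            node.children[a] = Node(step(node.state, a))
        reward = rollout(node.state)           # 3. simulation
        for n in path:                         # 4. backpropagation
            n.visits += 1
            n.value += reward
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy domain: walk on a line toward a goal at +3; reward is -|distance to goal|.
best = mcts(Node(0), actions=[-1, +1],
            step=lambda s, a: s + a,
            rollout=lambda s: -abs(3 - (s + sum(random.choice([-1, 1]) for _ in range(5)))))
print(best)  # usually +1: moving toward the goal
```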
arXiv Detail & Related papers (2023-12-26T06:57:22Z)
- Deep Multi-Agent Reinforcement Learning for Decentralized Active
Hypothesis Testing [11.639503711252663]
We tackle the multi-agent active hypothesis testing (AHT) problem by introducing a novel algorithm rooted in the framework of deep multi-agent reinforcement learning.
We present a comprehensive set of experimental results that effectively showcase the agents' ability to learn collaborative strategies and enhance performance.
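The paper's algorithm is learned end-to-end with deep MARL; as background only, the sketch below shows the Bayesian belief update that underlies active hypothesis testing. The likelihood values are made up for illustration.

```python
# Background sketch: the Bayesian posterior update at the heart of active
# hypothesis testing (the paper learns query strategies with deep MARL;
# this only illustrates the underlying belief computation).
import numpy as np

def belief_update(belief, lik_obs):
    """belief[k]: prior P(hypothesis k); lik_obs[k]: P(observed outcome | k, chosen query).
    Returns the normalized posterior."""
    post = belief * lik_obs
    return post / post.sum()

# Two hypotheses; two agents each run a query and share the outcomes.
belief = np.array([0.5, 0.5])
for lik in ([0.8, 0.3],        # agent 1's outcome: likely under hypothesis 0
            [0.7, 0.4]):       # agent 2's outcome
    belief = belief_update(belief, np.array(lik))
print(belief)  # mass concentrates on hypothesis 0
```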
arXiv Detail & Related papers (2023-09-14T01:18:04Z)
- MA2CL: Masked Attentive Contrastive Learning for Multi-Agent
Reinforcement Learning [128.19212716007794]
We propose an effective framework called Multi-Agent Masked Attentive Contrastive Learning (MA2CL).
MA2CL encourages the learned representation to be both temporally and agent-level predictive by reconstructing masked agent observations in latent space.
Our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios.
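A generic sketch of the masked-agent contrastive objective follows. It shows an InfoNCE loss over reconstructed embeddings of masked agents; MA2CL's actual attentive reconstruction model is not reproduced, and all names and dimensions are illustrative.

```python
# Generic sketch of the masked-agent contrastive idea (illustrative; see the
# paper for MA2CL's actual attention-based reconstruction and loss).
import numpy as np

def info_nce(reconstructed, targets, temperature=0.1):
    """InfoNCE: each reconstructed masked-agent embedding should match its own
    target embedding more closely than those of the other agents."""
    r = reconstructed / np.linalg.norm(reconstructed, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    logits = r @ t.T / temperature                 # (n_masked, n_agents)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(r))                        # positives on the diagonal
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
targets = rng.normal(size=(4, 32))                 # true embeddings of 4 agents
good = targets + 0.05 * rng.normal(size=(4, 32))   # accurate reconstructions
bad = rng.normal(size=(4, 32))                     # uninformative reconstructions
print(info_nce(good, targets) < info_nce(bad, targets))  # True
```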
arXiv Detail & Related papers (2023-06-03T05:32:19Z)
- Learning From Good Trajectories in Offline Multi-Agent Reinforcement
Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
In practice, some behavior policies in such datasets act almost randomly; an agent learned by offline MARL often inherits this random policy, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
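The sketch below illustrates only the underlying intuition, filtering a dataset by per-agent quality scores; SIT itself assigns per-agent credit with an attention mechanism before sharing good individual trajectories, which is not reproduced here, and the scores are made up.

```python
# Rough sketch of the underlying idea: keep only the (trajectory, agent) pairs
# whose credited return is good enough to learn from (illustrative only).
import numpy as np

def split_good_trajectories(per_agent_returns, quantile=0.5):
    """per_agent_returns[i, j]: credited return of agent j in trajectory i.
    Returns a boolean mask of (trajectory, agent) pairs worth learning from."""
    threshold = np.quantile(per_agent_returns, quantile, axis=0)
    return per_agent_returns >= threshold          # keep each agent's better half

returns = np.array([[9.0, 0.1],                    # traj 0: agent 1 acted randomly
                    [8.5, 7.9],
                    [1.2, 8.8]])                   # traj 2: agent 0 acted randomly
print(split_good_trajectories(returns))
# [[ True False]
#  [ True  True]
#  [False  True]]
```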
arXiv Detail & Related papers (2022-11-28T18:11:26Z)
- MARLlib: A Scalable and Efficient Multi-agent Reinforcement Learning
Library [82.77446613763809]
We present MARLlib, a library designed to offer fast development for multi-agent tasks and algorithm combinations.
MARLlib can effectively disentangle the intertwined nature of the multi-agent task and the learning process of the algorithm.
The library's source code is publicly accessible on GitHub.
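A typical quick-start, adapted from the project's documentation, looks roughly like the following; exact function names and arguments may differ between MARLlib versions, so treat this as an assumed sketch rather than a verified recipe.

```python
# Quick-start pattern adapted from MARLlib's documentation (assumed: the exact
# API and argument names may differ across library versions).
from marllib import marl

env = marl.make_env(environment_name="mpe", map_name="simple_spread")  # task
mappo = marl.algos.mappo(hyperparam_source="mpe")                      # algorithm
model = marl.build_model(env, mappo, {"core_arch": "mlp"})             # agent model
mappo.fit(env, model, stop={"timesteps_total": 1_000_000})             # train
```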
arXiv Detail & Related papers (2022-10-11T03:11:12Z)
- On the Use and Misuse of Absorbing States in Multi-agent Reinforcement
Learning [55.95253619768565]
Current MARL algorithms assume that the number of agents within a group remains fixed throughout an experiment.
In many practical problems, an agent may terminate before its teammates.
We present a novel architecture for an existing state-of-the-art MARL algorithm which uses attention instead of a fully connected layer with absorbing states.
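The sketch below illustrates the general mechanism rather than the paper's exact architecture: attention weights are masked so that terminated agents contribute nothing, avoiding the need to pad them with absorbing states. All shapes and names are illustrative.

```python
# Sketch: attend only over still-active agents via a mask, instead of padding
# terminated agents with absorbing states (illustrative only).
import numpy as np

def masked_attention(queries, keys, values, active):
    """Scaled dot-product attention whose softmax ignores inactive agents."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    scores[:, ~active] = -np.inf                   # terminated agents get no weight
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(4, 8))                # 4 agents, 8-dim features
active = np.array([True, True, False, True])       # agent 2 has terminated
out = masked_attention(q, k, v, active)
print(out.shape)  # (4, 8); agent 2 contributes nothing to anyone's context
```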
arXiv Detail & Related papers (2021-11-10T23:45:08Z)
- AoI-Aware Resource Allocation for Platoon-Based C-V2X Networks via
Multi-Agent Multi-Task Reinforcement Learning [22.890835786710316]
This paper investigates the problem of age of information (AoI) aware radio resource management for a platooning system.
Multiple autonomous platoons exploit the cellular wireless vehicle-to-everything (C-V2X) communication technology to disseminate the cooperative awareness messages (CAMs) to their followers.
We exploit a distributed resource allocation framework based on multi-agent reinforcement learning (MARL), where each platoon leader (PL) acts as an agent and interacts with the environment to learn its optimal policy.
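For reference, the age-of-information dynamics that such agents optimize over can be sketched as follows; the paper's MARL formulation additionally chooses channels and transmit power, which this sketch omits.

```python
# Sketch of age-of-information (AoI) dynamics (illustrative; the paper's
# agents also select radio resources, which this omits).
import numpy as np

def step_aoi(aoi, delivered):
    """AoI grows by one slot everywhere and resets to 1 where a CAM is delivered."""
    return np.where(delivered, 1, aoi + 1)

aoi = np.ones(3, dtype=int)                        # one AoI counter per follower
for delivered in ([False, True, False],
                  [False, False, False],
                  [True, True, False]):
    aoi = step_aoi(aoi, np.array(delivered))
print(aoi)  # [1 1 4]: the third follower has not heard a CAM for 4 slots
```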
arXiv Detail & Related papers (2021-05-10T08:39:56Z)
- Agent-Centric Representations for Multi-Agent Reinforcement Learning [12.577354830985012]
We investigate whether object-centric representations are also beneficial in the fully cooperative multi-agent reinforcement learning setting.
Specifically, we study two ways of incorporating an agent-centric inductive bias into our RL algorithm.
We evaluate these approaches on the Google Research Football environment as well as DeepMind Lab 2D.
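One simple form of such a bias, sketched below on made-up coordinates, is to re-express every entity's position in each agent's own frame before feeding that agent's network; the paper studies richer variants, so this is only illustrative.

```python
# Sketch of an agent-centric transform: express all entity positions relative
# to each agent (one of several possible such inductive biases; illustrative).
import numpy as np

def agent_centric_views(agent_pos, entity_pos):
    """For each agent, return every entity's position in that agent's frame."""
    return entity_pos[None, :, :] - agent_pos[:, None, :]

agents = np.array([[0.0, 0.0], [2.0, 1.0]])
entities = np.array([[1.0, 1.0], [3.0, 0.0]])      # e.g., ball and opponent
views = agent_centric_views(agents, entities)
print(views.shape)  # (2 agents, 2 entities, 2 coords)
print(views[1])     # entities as seen from agent 1: [[-1.  0.] [ 1. -1.]]
```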
arXiv Detail & Related papers (2021-04-19T15:43:40Z)
- HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via
Learned Messaging [14.960795846548029]
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation learning abilities of deep neural networks.
This paper considers the case where there is a single, powerful, central agent that can observe the entire observation space, and there are multiple, low powered, local agents that can only receive local observations and cannot communicate with each other.
The job of the central agent is to learn what message to send to each local agent based on the global observations, not by centrally solving the entire problem, but by determining what additional information an individual agent should receive so that it can make a better decision.
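A minimal sketch of this pattern follows: a central network maps the global observation to one message per local agent, and each agent concatenates its message with its local observation. Layer sizes and names are illustrative, not HAMMER's actual architecture.

```python
# Minimal sketch of the central-messaging pattern (illustrative sizes/names).
import numpy as np

rng = np.random.default_rng(0)
n_agents, global_dim, local_dim, msg_dim = 3, 12, 4, 2
W_msg = rng.normal(size=(global_dim, n_agents * msg_dim))  # central agent's head

def central_messages(global_obs):
    # One learned message per local agent, computed from global information.
    return np.tanh(global_obs @ W_msg).reshape(n_agents, msg_dim)

global_obs = rng.normal(size=global_dim)
local_obs = rng.normal(size=(n_agents, local_dim))
msgs = central_messages(global_obs)
augmented = np.concatenate([local_obs, msgs], axis=1)      # each policy's input
print(augmented.shape)  # (3, 6): local observation + its private message
```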
arXiv Detail & Related papers (2021-01-18T19:00:12Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
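The combine-then-adapt diffusion step at the core of such algorithms can be sketched as follows; for simplicity, a plain local gradient on a toy quadratic stands in for the MAML inner-loop adaptation, so this illustrates the diffusion mechanism only.

```python
# Sketch of a combine-then-adapt diffusion step (simplified: a plain local
# gradient on a toy quadratic stands in for the full MAML meta-gradient).
import numpy as np

def dif_step(params, grad_fn, A, lr=0.1):
    """params[k]: agent k's parameters; A[k, j]: weight agent k gives neighbor j
    (rows of A sum to 1 and respect the communication topology)."""
    mixed = A @ params                   # combine: consensus over neighbors
    return mixed - lr * grad_fn(mixed)   # adapt: local gradient step

# Three fully connected agents mixing equally; each minimizes (theta - 4)^2.
A = np.full((3, 3), 1 / 3)
params = np.array([[1.0], [5.0], [9.0]])
for _ in range(30):
    params = dif_step(params, lambda p: 2 * (p - 4.0), A)
print(params.ravel())  # all agents agree near the shared optimum 4.0
```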
arXiv Detail & Related papers (2020-10-06T16:51:09Z)