SEA: A Spatially Explicit Architecture for Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2304.12532v1
- Date: Tue, 25 Apr 2023 03:00:09 GMT
- Title: SEA: A Spatially Explicit Architecture for Multi-Agent Reinforcement Learning
- Authors: Dapeng Li, Zhiwei Xu, Bin Zhang, Guoliang Fan
- Abstract summary: We propose a spatial information extraction structure for multi-agent reinforcement learning.
Agents can effectively share neighborhood and global information through a spatial encoder-decoder structure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatial information is essential in many fields. Explicitly modeling the
spatial locations of agents is also very important for multi-agent problems,
especially when the number of agents changes and the scale is enormous. Inspired
by point cloud tasks in computer vision, we propose a spatial information
extraction structure for multi-agent reinforcement learning in this paper.
Agents can effectively share neighborhood and global information through a
spatial encoder-decoder structure. Our method follows the centralized training
with decentralized execution (CTDE) paradigm. In addition, our structure can be
applied to various existing mainstream reinforcement learning algorithms with
minor modifications and can handle a variable number of agents. Experiments in
several multi-agent scenarios show that existing methods achieve convincing
results when augmented with our spatially explicit architecture.
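The abstract describes a point-cloud-inspired encoder-decoder that lets a variable number of agents share neighborhood and global spatial information. The paper's exact architecture is not given here; as a rough, hedged illustration of the general idea (a shared per-agent encoder, a permutation-invariant pooled global context, and a decoder mixing the two), a minimal numpy sketch with invented names and sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a shared per-agent encoder and decoder. The names, sizes,
# and activations are illustrative assumptions, not the paper's architecture.
D_IN, D_HID = 4, 8  # per-agent features (e.g. x, y, vx, vy), hidden size
W_enc = rng.normal(size=(D_IN, D_HID))
W_dec = rng.normal(size=(D_HID * 2, D_HID))

def encode_decode(agent_states: np.ndarray) -> np.ndarray:
    """Encode each agent, pool a global summary, then decode a per-agent
    embedding mixing local and global spatial information.

    agent_states: (n_agents, D_IN) -- works for any number of agents.
    """
    local = np.tanh(agent_states @ W_enc)             # (n, D_HID), shared encoder
    global_ctx = local.max(axis=0)                    # permutation-invariant pool
    tiled = np.broadcast_to(global_ctx, local.shape)  # share global info with all
    return np.tanh(np.concatenate([local, tiled], axis=1) @ W_dec)

out5 = encode_decode(rng.normal(size=(5, D_IN)))  # 5 agents
out9 = encode_decode(rng.normal(size=(9, D_IN)))  # 9 agents, same weights
```

Because the pooling step is order-independent, shuffling the agents only permutes the output rows, which is what allows the same weights to serve any team size under CTDE-style training.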
Related papers
- Very Large-Scale Multi-Agent Simulation in AgentScope [112.98986800070581]
We develop new features and components for AgentScope, a user-friendly multi-agent platform.
We propose an actor-based distributed mechanism to achieve high scalability and efficiency.
We also provide a web-based interface for conveniently monitoring and managing a large number of agents.
arXiv Detail & Related papers (2024-07-25T05:50:46Z)
- Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models [106.94827590977337]
We propose a novel world model for Multi-Agent RL (MARL) that learns decentralized local dynamics for scalability.
We also introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation.
Results on the StarCraft Multi-Agent Challenge (SMAC) show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance.
arXiv Detail & Related papers (2024-06-22T12:40:03Z)
- AgentScope: A Flexible yet Robust Multi-Agent Platform [66.64116117163755]
AgentScope is a developer-centric multi-agent platform with message exchange as its core communication mechanism.
The abundant syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitor, zero-code programming workstation, and automatic prompt tuning mechanism significantly lower the barriers to both development and deployment.
arXiv Detail & Related papers (2024-02-21T04:11:28Z)
- An Interactive Agent Foundation Model [49.77861810045509]
We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents.
Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction.
We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare.
arXiv Detail & Related papers (2024-02-08T18:58:02Z)
- MASP: Scalable GNN-based Planning for Multi-Agent Navigation [17.788592987873905]
We propose a goal-conditioned hierarchical planner for navigation tasks with a substantial number of agents.
We also leverage graph neural networks (GNN) to model the interaction between agents and goals, improving goal achievement.
The results demonstrate that MASP outperforms classical planning-based competitors and RL baselines.
arXiv Detail & Related papers (2023-12-05T06:05:04Z)
- Scalable Multi-agent Covering Option Discovery based on Kronecker Graphs [49.71319907864573]
In this paper, we propose a multi-agent skill discovery method that enables easy decomposition.
Our key idea is to approximate the joint state space as a Kronecker graph, based on which we can directly estimate its Fiedler vector.
Considering that directly computing the Laplacian spectrum is intractable for tasks with infinite-scale state spaces, we further propose a deep learning extension of our method.
arXiv Detail & Related papers (2023-07-21T14:53:12Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved great success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- K-nearest Multi-agent Deep Reinforcement Learning for Collaborative Tasks with a Variable Number of Agents [13.110291070230815]
We propose a new deep reinforcement learning algorithm for multi-agent collaborative tasks with a variable number of agents.
We demonstrate the application of our algorithm using a fleet management simulator developed by Hitachi to generate realistic scenarios in a production site.
arXiv Detail & Related papers (2022-01-18T16:14:24Z)
- Meta-CPR: Generalize to Unseen Large Number of Agents with Communication Pattern Recognition Module [29.75594940509839]
We formulate multi-agent environments with different numbers of agents as a multi-task problem.
We propose a meta reinforcement learning (meta-RL) framework to tackle this problem.
The proposed framework employs a meta-learned Communication Pattern Recognition (CPR) module to identify communication behavior.
arXiv Detail & Related papers (2021-12-14T08:23:04Z)
- Learning Efficient Multi-Agent Cooperative Visual Exploration [18.42493808094464]
We consider the task of visual indoor exploration with multiple agents, where the agents need to cooperatively explore the entire indoor region using as few steps as possible.
We extend the state-of-the-art single-agent RL solution, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based global-goal planner, the Spatial Coordination Planner (SCP).
SCP leverages spatial information from each individual agent in an end-to-end manner and effectively guides the agents to navigate towards different spatial goals with high exploration efficiency.
arXiv Detail & Related papers (2021-10-12T04:48:10Z)
- Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward [17.925681736096482]
It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues.
In this paper, we identify a rich class of networked MARL problems where the model exhibits a local dependence structure that allows it to be solved in a scalable manner.
arXiv Detail & Related papers (2020-06-11T17:23:17Z)
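The Kronecker-graphs entry above relies on the Fiedler vector: the eigenvector of the graph Laplacian L = D - A associated with its second-smallest eigenvalue, whose sign pattern gives a natural two-way partition of the graph. That paper's contribution is estimating it at scale via a Kronecker factorization; as a hedged sketch of the concept only, a direct dense computation on a small graph (the function name and example graph are mine, not the paper's):

```python
import numpy as np

def fiedler_vector(adj: np.ndarray) -> np.ndarray:
    """Return the Laplacian eigenvector for the second-smallest eigenvalue.

    This is the brute-force dense version; it does NOT scale to the
    infinite-scale joint state spaces the paper targets.
    """
    lap = np.diag(adj.sum(axis=1)) - adj       # graph Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)     # eigh sorts eigenvalues ascending
    return eigvecs[:, 1]

# Path graph 0-1-2-3: the Fiedler vector changes sign between the two halves,
# splitting the path down the middle.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
v = fiedler_vector(A)
```

Note that the adjacency matrix of a Kronecker product graph factors as `np.kron(A1, A2)`, which is what lets the joint state space of multiple agents be approximated compactly from per-agent graphs.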
This list is automatically generated from the titles and abstracts of the papers in this site.