From Multi-agent to Multi-robot: A Scalable Training and Evaluation
Platform for Multi-robot Reinforcement Learning
- URL: http://arxiv.org/abs/2206.09590v1
- Date: Mon, 20 Jun 2022 06:36:45 GMT
- Title: From Multi-agent to Multi-robot: A Scalable Training and Evaluation
Platform for Multi-robot Reinforcement Learning
- Authors: Zhiuxan Liang, Jiannong Cao, Shan Jiang, Divya Saxena, Jinlin Chen,
Huafeng Xu
- Abstract summary: Multi-agent reinforcement learning (MARL) has been gaining extensive attention from academia and industries in the past few decades.
It remains unknown how these methods perform in real-world scenarios, especially multi-robot systems.
This paper introduces a scalable emulation platform for multi-robot reinforcement learning (MRRL) called SMART to meet this need.
- Score: 12.74238738538799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent reinforcement learning (MARL) has been gaining extensive
attention from academia and industries in the past few decades. One of the
fundamental problems in MARL is how to evaluate different approaches
comprehensively. Most existing MARL methods are evaluated in either video games
or simplistic simulated scenarios. It remains unknown how these methods perform
in real-world scenarios, especially multi-robot systems. This paper introduces
a scalable emulation platform for multi-robot reinforcement learning (MRRL)
called SMART to meet this need. Specifically, SMART consists of two components:
1) a simulation environment that provides a variety of complex interaction
scenarios for training and 2) a real-world multi-robot system for realistic
performance evaluation. In addition, SMART offers plug-and-play
agent-environment APIs for algorithm implementation. To illustrate the
practicality of our platform, we conduct a case study on the cooperative
driving lane-change scenario. Building on the case study, we summarize several
unique challenges of MRRL that have rarely been considered in prior work.
Finally, we open-source the
simulation environments, associated benchmark tasks, and state-of-the-art
baselines to encourage and empower MRRL research.
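
The plug-and-play agent-environment API is the part of the platform most
directly relevant to algorithm implementation. Below is a minimal, hypothetical
sketch of what such a per-robot, gym-style interface could look like; the class
and method names (`MultiRobotEnv`, `reset`, `step`) are illustrative assumptions
and do not reflect the actual SMART code.

```python
# Hypothetical sketch of a plug-and-play multi-robot environment API.
# Names and shapes are assumptions for illustration, not the SMART interface.
from typing import Dict, Tuple
import numpy as np


class MultiRobotEnv:
    """Gym-style environment that returns per-robot observations and rewards."""

    def __init__(self, num_robots: int = 3):
        self.num_robots = num_robots
        self.agent_ids = [f"robot_{i}" for i in range(num_robots)]

    def reset(self) -> Dict[str, np.ndarray]:
        # One observation per robot, keyed by agent id.
        return {aid: np.zeros(4, dtype=np.float32) for aid in self.agent_ids}

    def step(
        self, actions: Dict[str, int]
    ) -> Tuple[Dict[str, np.ndarray], Dict[str, float],
               Dict[str, bool], Dict[str, dict]]:
        # Apply each robot's action, then return per-robot transitions.
        obs = {aid: np.random.randn(4).astype(np.float32)
               for aid in self.agent_ids}
        rewards = {aid: 0.0 for aid in self.agent_ids}
        dones = {aid: False for aid in self.agent_ids}
        infos: Dict[str, dict] = {aid: {} for aid in self.agent_ids}
        return obs, rewards, dones, infos


# Usage: a training loop interacts only through reset()/step(), so any MARL
# algorithm that speaks this contract can be dropped in without changes.
env = MultiRobotEnv(num_robots=3)
obs = env.reset()
for _ in range(100):
    actions = {aid: 0 for aid in env.agent_ids}  # placeholder policy
    obs, rewards, dones, infos = env.step(actions)
    if all(dones.values()):
        obs = env.reset()
```

Dictionaries keyed by agent id are a common convention in multi-agent RL
interfaces (e.g., PettingZoo's parallel API and RLlib's multi-agent
environments); any algorithm consuming this reset()/step() contract can be
swapped in without touching the environment, which is the sense in which the
abstract calls the APIs plug-and-play.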
Related papers
- Multi-Agent Reinforcement Learning for Autonomous Driving: A Survey [14.73689900685646]
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities.
As the extension of RL to the multi-agent system domain, multi-agent RL (MARL) not only needs to learn the control policy but also must account for interactions with all other agents in the environment.
Simulators are crucial for obtaining realistic data, which is fundamental to RL.
arXiv Detail & Related papers (2024-08-19T03:31:20Z) - POGEMA: A Benchmark Platform for Cooperative Multi-Agent Navigation [76.67608003501479]
We introduce and specify an evaluation protocol defining a range of domain-related metrics computed on the basis of the primary evaluation indicators.
The results of such a comparison, which involves a variety of state-of-the-art MARL, search-based, and hybrid methods, are presented.
arXiv Detail & Related papers (2024-07-20T16:37:21Z) - Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z) - MAexp: A Generic Platform for RL-based Multi-Agent Exploration [5.672198570643586]
Existing platforms suffer from inefficient sampling and a lack of diversity in Multi-Agent Reinforcement Learning (MARL) algorithms.
We propose MAexp, a generic platform for multi-agent exploration that integrates a broad range of state-of-the-art MARL algorithms and representative scenarios.
arXiv Detail & Related papers (2024-04-19T12:00:10Z) - SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z) - A Versatile Multi-Agent Reinforcement Learning Benchmark for Inventory
Management [16.808873433821464]
Multi-agent reinforcement learning (MARL) models multiple agents that interact and learn within a shared environment.
Applying MARL to real-world scenarios is impeded by many challenges such as scaling up, complex agent interactions, and non-stationary dynamics.
arXiv Detail & Related papers (2023-06-13T05:22:30Z) - Distributed Reinforcement Learning for Robot Teams: A Review [10.92709534981466]
Recent advances in sensing, actuation, and computation have opened the door to multi-robot systems.
The community has leveraged model-free multi-agent reinforcement learning to devise efficient, scalable controllers for multi-robot systems.
Recent findings: decentralized multi-robot systems face fundamental challenges, such as non-stationarity and partial observability.
arXiv Detail & Related papers (2022-04-07T15:34:19Z) - Multitask Adaptation by Retrospective Exploration with Learned World
Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z) - MALib: A Parallel Framework for Population-based Multi-agent
Reinforcement Learning [61.28547338576706]
Population-based multi-agent reinforcement learning (PB-MARL) refers to a family of methods that couple population-based training with reinforcement learning (RL) algorithms.
We present MALib, a scalable and efficient computing framework for PB-MARL.
arXiv Detail & Related papers (2021-06-05T03:27:08Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z)