ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2112.05923v1
- Date: Sat, 11 Dec 2021 06:31:21 GMT
- Title: ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep
Reinforcement Learning
- Authors: Xiao-Yang Liu and Zechu Li and Zhuoran Yang and Jiahao Zheng and
Zhaoran Wang and Anwar Walid and Jian Guo and Michael I. Jordan
- Abstract summary: We present a library ElegantRL-podracer for cloud-native deep reinforcement learning.
It efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels.
At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 CUDA cores in a single GPU.
- Score: 141.58588761593955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) has revolutionized learning and actuation
in applications such as game playing and robotic control. The cost of data
collection, i.e., generating transitions from agent-environment interactions,
remains a major challenge for wider DRL adoption in complex real-world
problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud
platform is a promising solution. In this paper, we present a scalable and
elastic library ElegantRL-podracer for cloud-native deep reinforcement
learning, which efficiently supports millions of GPU cores to carry out
massively parallel training at multiple levels. At a high level,
ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate
the training process on hundreds or even thousands of GPUs, scheduling the
interactions between a leaderboard and a training pool with hundreds of pods.
At a low level, each pod simulates agent-environment interactions in parallel
by fully utilizing nearly 7,000 CUDA cores in a single GPU. Our
ElegantRL-podracer library features high scalability, elasticity and
accessibility by following the development principles of containerization,
microservices and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct
extensive experiments on various tasks in locomotion and stock trading and show
that ElegantRL-podracer substantially outperforms RLlib. Our code is
available on GitHub.
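The tournament-based ensemble scheme can be pictured as a leaderboard of agent checkpoints feeding a pool of training pods. Below is a minimal, self-contained Python sketch of that orchestration pattern; the names (Leaderboard, train_in_pod) and the toy objective are illustrative assumptions, not ElegantRL-podracer's actual API.

```python
# Hypothetical sketch of tournament-based ensemble training: a leaderboard
# keeps the top-k agent checkpoints; each pod restarts from a sampled leader,
# trains, and submits its result. Illustrative only, not ElegantRL's API.
import heapq
import random

class Leaderboard:
    """Top-k agent checkpoints ordered by evaluation score."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self._heap = []   # min-heap of (score, tiebreak, weights)
        self._count = 0

    def submit(self, score, weights):
        heapq.heappush(self._heap, (score, self._count, weights))
        self._count += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)          # evict the weakest checkpoint

    def sample(self):
        """Tournament step: a pod restarts from a randomly drawn leader."""
        return random.choice(self._heap)[2]

    def best(self):
        score, _, weights = max(self._heap)
        return score, weights

def train_in_pod(weights):
    """Stand-in for one pod's work; the real system would run thousands of
    GPU-parallel rollouts plus gradient updates here."""
    new_weights = [w + random.gauss(0.0, 0.1) for w in weights]
    score = -sum(w * w for w in new_weights)   # toy objective: minimize ||w||^2
    return new_weights, score

leaderboard = Leaderboard(capacity=4)
leaderboard.submit(float("-inf"), [random.gauss(0.0, 1.0) for _ in range(8)])

for generation in range(20):                   # orchestrator loop
    for pod in range(8):                       # pods train independently
        weights, score = train_in_pod(leaderboard.sample())
        leaderboard.submit(score, weights)

print("best score after tournament:", round(leaderboard.best()[0], 3))
```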
Related papers
- Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining [49.730897226510095]
We introduce JOWA: Jointly-Optimized World-Action model, an offline model-based RL agent pretrained on Atari games with 6 billion tokens of data.
Our largest agent, with 150 million parameters, achieves 78.9% human-level performance on pretrained games using only 10% subsampled offline data, outperforming existing state-of-the-art large-scale offline RL baselines by 31.6% on average.
arXiv Detail & Related papers (2024-10-01T10:25:03Z)
- NAVIX: Scaling MiniGrid Environments with JAX [17.944645332888335]
We introduce NAVIX, a re-implementation of MiniGrid in JAX.
NAVIX achieves over 200,000x speed improvements in batch mode, supporting up to 2048 agents in parallel on a single NVIDIA A100 80 GB.
This reduces experiment times from one week to 15 minutes, enabling faster design iteration and more scalable RL model development.
arXiv Detail & Related papers (2024-07-28T04:39:18Z)
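The batch-mode speedup comes from expressing the environment step as a pure function and vectorizing it across thousands of parallel states. Here is a generic toy illustration of that pattern with jax.vmap; it is not NAVIX's actual API.

```python
# Toy illustration of JAX-native batch-mode environments: write the step
# function for a single environment, then jax.vmap it across thousands of
# parallel states and jax.jit the result. Generic sketch, not NAVIX's API.
import jax
import jax.numpy as jnp

def step(state, action):
    """Single-environment dynamics: a 1-D random walk."""
    new_state = state + jnp.where(action == 1, 1, -1)
    reward = -jnp.abs(new_state).astype(jnp.float32)
    return new_state, reward

batched_step = jax.jit(jax.vmap(step))  # one fused kernel over all envs

num_envs = 2048
states = jnp.zeros(num_envs, dtype=jnp.int32)
actions = jnp.ones(num_envs, dtype=jnp.int32)
states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)  # (2048,) (2048,)
```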
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- XuanCe: A Comprehensive and Unified Deep Reinforcement Learning Library [18.603206638756056]
XuanCe is a comprehensive and unified deep reinforcement learning (DRL) library.
XuanCe offers a wide range of functionalities, including over 40 classical DRL and multi-agent DRL algorithms.
XuanCe is open-source and can be accessed at https://agi-brain.com/agi-brain/xuance.git.
arXiv Detail & Related papers (2023-12-25T14:45:39Z)
- SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores [13.948640763797776]
We present a novel abstraction of the dataflows of RL training, which unifies diverse RL training applications into a general framework.
We develop a scalable, efficient, and distributed RL system called ReaLly Scalable RL (SRL), which allows efficient and massively parallelized training.
SRL is the first system in the academic community to perform RL experiments at large scale, with over 15k CPU cores.
arXiv Detail & Related papers (2023-06-29T05:16:25Z)
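As a rough picture of what such a dataflow abstraction decouples, the sketch below wires actor workers to a learner through a shared trajectory buffer; the names and the threading setup are assumptions for illustration, not SRL's architecture or API.

```python
# Generic sketch of a decoupled actor/learner dataflow: many actor workers
# push trajectories into a shared buffer while a learner pulls batches and
# updates the policy. Hypothetical names, not SRL's actual API.
import queue
import threading
import random

trajectory_buffer = queue.Queue(maxsize=1024)

def actor(worker_id, num_rollouts):
    for _ in range(num_rollouts):
        # Stand-in for an environment rollout under the current policy.
        trajectory = [(random.random(), random.randint(0, 1)) for _ in range(8)]
        trajectory_buffer.put((worker_id, trajectory))

def learner(num_updates, batch_size=4):
    for update in range(num_updates):
        batch = [trajectory_buffer.get() for _ in range(batch_size)]
        # Stand-in for a gradient step on the batch.
        print(f"update {update}: consumed {len(batch)} trajectories")

actors = [threading.Thread(target=actor, args=(i, 8)) for i in range(4)]
learn = threading.Thread(target=learner, args=(8,))
for t in actors: t.start()
learn.start()
for t in actors: t.join()
learn.join()
```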
- RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control [8.159171440455824]
Deep Reinforcement Learning (RL) can yield capable agents and control policies in several domains but is commonly plagued by prohibitively long training times.
We present RLtools, a dependency-free, header-only, pure C++ library for deep supervised and reinforcement learning.
arXiv Detail & Related papers (2023-06-06T09:26:43Z)
- M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining [55.16088793437898]
Training extreme-scale models requires enormous amounts of compute and memory.
We propose a simple training strategy called "Pseudo-to-Real" for large models with high memory-footprint requirements.
arXiv Detail & Related papers (2021-10-08T04:24:51Z)
- WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU [15.337470862838794]
We present WarpDrive, a flexible, lightweight, and easy-to-use open-source RL framework that implements end-to-end multi-agent RL on a single GPU.
Our design runs simulations and the agents in each simulation in parallel. It also uses a single simulation data store on the GPU that is safely updated in-place.
WarpDrive yields 2.9 million environment steps/second with 2000 environments and 1000 agents (at least 100x higher throughput compared to a CPU implementation) in a benchmark Tag simulation.
arXiv Detail & Related papers (2021-08-31T16:59:27Z)
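The in-place GPU data store described above can be illustrated with a small, generic PyTorch snippet; WarpDrive itself uses custom CUDA kernels, so the names and toy dynamics here are assumptions, not its API.

```python
# Sketch of the key design idea: keep one batched simulation data store on
# the GPU and update it in place, so environment steps never round-trip
# through the CPU. Generic PyTorch code, not WarpDrive's actual API.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, num_agents = 2000, 1000

# One persistent data store: positions of every agent in every environment.
positions = torch.zeros(num_envs, num_agents, 2, device=device)

def step_all(actions):
    """One parallel step for every (env, agent) pair, updating state in place."""
    moves = torch.stack([actions.cos(), actions.sin()], dim=-1)
    positions.add_(moves)                 # in-place update, no host copies
    rewards = -positions.norm(dim=-1)     # toy reward: stay near the origin
    return rewards

actions = torch.rand(num_envs, num_agents, device=device) * 6.28318
rewards = step_all(actions)
print(rewards.shape)  # torch.Size([2000, 1000])
```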
- Decoupling Representation Learning from Reinforcement Learning [89.82834016009461]
We introduce an unsupervised learning task called Augmented Temporal Contrast (ATC).
ATC trains a convolutional encoder to associate pairs of observations separated by a short time difference.
In online RL experiments, we show that training the encoder exclusively using ATC matches or outperforms end-to-end RL.
arXiv Detail & Related papers (2020-09-14T19:11:13Z)
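As a rough sketch of the temporal-contrastive idea, the snippet below trains an encoder so that an observation and its near-future counterpart score higher than mismatched pairs within the batch, via an InfoNCE-style loss; the tiny encoder and shapes are assumptions, not the paper's architecture.

```python
# Minimal temporal-contrastive sketch: encode o_t and o_{t+k}, then use an
# InfoNCE-style loss so matching pairs beat mismatched ones in the batch.
# Generic code, not the ATC paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # small convolutional encoder
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 128),
)

def temporal_contrastive_loss(obs_t, obs_tk):
    anchors = encoder(obs_t)           # (B, 128) embeddings of o_t
    positives = encoder(obs_tk)        # (B, 128) embeddings of o_{t+k}
    logits = anchors @ positives.t()   # (B, B) pairwise similarity scores
    labels = torch.arange(len(obs_t))  # row i should match column i
    return F.cross_entropy(logits, labels)

obs_t = torch.randn(32, 3, 16, 16)    # a batch of observations at time t
obs_tk = torch.randn(32, 3, 16, 16)   # the same trajectories k steps later
loss = temporal_contrastive_loss(obs_t, obs_tk)
loss.backward()
print(float(loss))
```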
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.