d3rlpy: An Offline Deep Reinforcement Learning Library
- URL: http://arxiv.org/abs/2111.03788v1
- Date: Sat, 6 Nov 2021 03:09:39 GMT
- Title: d3rlpy: An Offline Deep Reinforcement Learning Library
- Authors: Takuma Seno, Michita Imai
- Abstract summary: We introduce d3rlpy, an open-sourced offline deep reinforcement learning (RL) library for Python.
d3rlpy supports a number of offline deep RL algorithms as well as online algorithms via a user-friendly API.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce d3rlpy, an open-sourced offline deep
reinforcement learning (RL) library for Python. d3rlpy supports a number of
offline deep RL algorithms as well as online algorithms via a user-friendly
API. To assist deep RL research and development projects, d3rlpy provides
practical and unique features such as data collection, exporting policies for
deployment, preprocessing and postprocessing, distributional Q-functions,
multi-step learning and a convenient command-line interface. Furthermore,
d3rlpy provides a novel graphical interface that enables users to train
offline RL algorithms without writing code. Lastly, the implemented
algorithms are benchmarked with D4RL datasets to ensure the implementation
quality. The d3rlpy source code can be found on GitHub:
\url{https://github.com/takuseno/d3rlpy}.
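Among the features listed above, multi-step learning can be illustrated with a small, self-contained sketch of n-step return computation over a logged reward sequence. This is plain Python for illustration only; the function name and data layout are assumptions, not d3rlpy's actual API.

```python
# Illustrative sketch of n-step return computation, the quantity behind
# the multi-step learning feature mentioned in the abstract. The function
# name and arguments are hypothetical, not part of d3rlpy's API.

def n_step_return(rewards, gamma=0.99, n=3, bootstrap_value=0.0):
    """Discounted n-step return from the start of a logged reward sequence:

    G = r_0 + gamma*r_1 + ... + gamma^(n-1)*r_{n-1} + gamma^n * bootstrap_value
    """
    ret = 0.0
    steps = min(n, len(rewards))
    for k in range(steps):
        ret += (gamma ** k) * rewards[k]
    if steps == n:  # only bootstrap when the full n-step window fits
        ret += (gamma ** n) * bootstrap_value
    return ret

# Example: 3-step return with discount 0.5 and a bootstrapped tail value.
rewards = [1.0, 0.0, 1.0, 0.0]
g = n_step_return(rewards, gamma=0.5, n=3, bootstrap_value=2.0)
# 1.0 + 0.5*0.0 + 0.25*1.0 + 0.125*2.0 = 1.5
```

The n=1 case recovers the standard one-step TD target; larger n trades bias for variance, which is why libraries expose it as a tunable setting.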
Related papers
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z)
- SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores [13.948640763797776]
We present a novel abstraction on the dataflows of RL training, which unifies diverse RL training applications into a general framework.
We develop a scalable, efficient, and distributed RL system called ReaLly scalableRL, which allows efficient and massively parallelized training.
SRL is the first system in the academic community to perform large-scale RL experiments with over 15k CPU cores.
arXiv Detail & Related papers (2023-06-29T05:16:25Z)
- RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control [7.259696592534715]
Deep Reinforcement Learning (RL) can yield capable agents and control policies in several domains but is commonly plagued by prohibitively long training times.
We present RLtools, a dependency-free, header-only, pure C++ library for deep supervised and reinforcement learning.
arXiv Detail & Related papers (2023-06-06T09:26:43Z)
- Improving and Benchmarking Offline Reinforcement Learning Algorithms [87.67996706673674]
This work aims to bridge the gaps caused by low-level choices and datasets.
We empirically investigate 20 implementation choices using three representative algorithms.
We find two variants, CRR+ and CQL+, that achieve new state-of-the-art results on D4RL.
arXiv Detail & Related papers (2023-06-01T17:58:46Z)
- Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning [147.61075994259807]
We propose Exploratory data for Offline RL (ExORL), a data-centric approach to offline RL.
ExORL first generates data with unsupervised reward-free exploration, then relabels this data with a downstream reward before training a policy with offline RL.
We find that exploratory data allows vanilla off-policy RL algorithms, without any offline-specific modifications, to outperform or match state-of-the-art offline RL algorithms on downstream tasks.
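The relabeling step in the ExORL pipeline described above, where transitions collected reward-free are annotated with a downstream task's reward before offline training, can be sketched as follows. The transition format and the example reward function are hypothetical illustrations, not ExORL's actual code.

```python
# Sketch of ExORL-style reward relabeling: transitions are collected by
# reward-free exploration, then annotated with a downstream task reward
# before an offline RL algorithm is trained on them. The transition
# layout and reward function here are hypothetical.

def relabel(transitions, reward_fn):
    """Return a copy of each (state, action, next_state) transition with
    a reward computed by the downstream task's reward function."""
    return [
        {**t, "reward": reward_fn(t["state"], t["action"], t["next_state"])}
        for t in transitions
    ]

# Example downstream task: reward progress of a 1-D state toward zero.
def toward_origin(state, action, next_state):
    return abs(state) - abs(next_state)

logged = [
    {"state": 2.0, "action": -1.0, "next_state": 1.0},
    {"state": 1.0, "action": 1.0, "next_state": 2.0},
]
relabeled = relabel(logged, toward_origin)
# relabeled[0]["reward"] == 1.0, relabeled[1]["reward"] == -1.0
```

Because only the reward field changes, the same exploratory dataset can be relabeled for any number of downstream tasks, which is the data-centric point the paper makes.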
arXiv Detail & Related papers (2022-01-31T18:39:27Z)
- ShinRL: A Library for Evaluating RL Algorithms from Theoretical and Practical Perspectives [11.675763847424786]
We present ShinRL, an open-source library for evaluation of reinforcement learning (RL) algorithms.
ShinRL provides an RL environment interface that can compute metrics for delving into the behaviors of RL algorithms.
We show how combining these two features of ShinRL makes it easier to analyze the behavior of deep Q learning.
arXiv Detail & Related papers (2021-12-08T05:34:46Z)
- SaLinA: Sequential Learning of Agents [13.822224899460656]
SaLinA is a library that makes implementing complex sequential learning models easy, including reinforcement learning algorithms.
It is built as an extension of PyTorch: algorithms written with SaLinA can be understood by PyTorch users in a few minutes and modified easily.
arXiv Detail & Related papers (2021-10-15T07:50:35Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures which can be highly competitive against manually designed policies, but also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- RLlib Flow: Distributed Reinforcement Learning is a Dataflow Problem [37.38316954355031]
We re-examine the challenges posed by distributed reinforcement learning.
We show that viewing RL as a dataflow problem leads to highly composable and performant implementations.
We propose RLlib Flow, a hybrid actor-dataflow programming model for distributed RL.
arXiv Detail & Related papers (2020-11-25T13:28:16Z)
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
- MushroomRL: Simplifying Reinforcement Learning Research [60.70556446270147]
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
arXiv Detail & Related papers (2020-01-04T17:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.