CORL: Research-oriented Deep Offline Reinforcement Learning Library
- URL: http://arxiv.org/abs/2210.07105v4
- Date: Thu, 26 Oct 2023 19:18:14 GMT
- Title: CORL: Research-oriented Deep Offline Reinforcement Learning Library
- Authors: Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov,
Sergey Kolesnikov
- Abstract summary: CORL is an open-source library that provides thoroughly benchmarked single-file implementations of reinforcement learning algorithms.
It emphasizes a simple development experience with a straightforward codebase and a modern analysis tracking tool.
- Score: 48.47248460865739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CORL is an open-source library that provides thoroughly benchmarked
single-file implementations of both deep offline and offline-to-online
reinforcement learning algorithms. It emphasizes a simple development experience
with a straightforward codebase and a modern analysis tracking tool. In CORL,
we isolate method implementations into separate single files, making
performance-relevant details easier to recognize. Additionally, an experiment
tracking feature is available to help log metrics, hyperparameters,
dependencies, and more to the cloud. Finally, we have ensured the reliability
of the implementations by benchmarking them on the commonly employed D4RL datasets,
providing a transparent source of results that can be reused by robust
evaluation tools such as performance profiles, probability of improvement, or
expected online performance.
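A minimal sketch of the kind of cloud experiment tracking described above, assuming Weights & Biases as the tracker; the project name, config fields, and stand-in loss below are hypothetical and this is not CORL's actual code:

    import random
    import wandb

    # hypothetical hyperparameters for an offline RL run
    config = {"env": "halfcheetah-medium-v2", "batch_size": 256, "gamma": 0.99}

    run = wandb.init(project="offline-rl-demo", config=config)  # config is uploaded to the cloud

    for step in range(1000):
        critic_loss = random.random()  # stand-in for one gradient update of the algorithm
        run.log({"critic_loss": critic_loss}, step=step)  # per-step metrics

    run.finish()

Running such a script sends the hyperparameters and per-step metrics to a hosted run page, which is what makes the logged results reusable for later analysis.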
Related papers
- Simple Ingredients for Offline Reinforcement Learning [86.1988266277766]
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task.
We show that existing methods struggle with diverse data: their performance considerably deteriorates as data collected for related but different tasks is simply added to the offline buffer.
We show that scale, more than algorithmic considerations, is the key factor influencing performance.
arXiv Detail & Related papers (2024-03-19T18:57:53Z) - Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement
Learning [41.971465819626005]
We present Open RL Benchmark, a set of fully tracked RL experiments.
Open RL Benchmark is community-driven: anyone can download, use, and contribute to the data.
Special care is taken to ensure that each experiment is precisely reproducible.
arXiv Detail & Related papers (2024-02-05T14:32:00Z) - Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches.
arXiv Detail & Related papers (2023-02-06T17:30:22Z) - Benchmarks and Algorithms for Offline Preference-Based Reward Learning [41.676208473752425]
We propose an approach that uses an offline dataset to craft preference queries via pool-based active learning.
Our proposed approach does not require actual physical rollouts or an accurate simulator for either the reward learning or policy optimization steps.
arXiv Detail & Related papers (2023-01-03T23:52:16Z) - Challenges and Opportunities in Offline Reinforcement Learning from
Visual Observations [58.758928936316785]
Offline reinforcement learning from visual observations with continuous action spaces remains under-explored.
We show that modifications to two popular vision-based online reinforcement learning algorithms suffice to outperform existing offline RL methods.
arXiv Detail & Related papers (2022-06-09T22:08:47Z) - Importance of Empirical Sample Complexity Analysis for Offline
Reinforcement Learning [55.90351453865001]
We ask the question of the dependency on the number of samples for learning from offline data.
Our objective is to emphasize that studying sample complexity for offline RL is important, and is an indicator of the usefulness of existing offline algorithms.
arXiv Detail & Related papers (2021-12-31T18:05:33Z) - CleanRL: High-quality Single-file Implementations of Deep Reinforcement
Learning Algorithms [0.0]
CleanRL is an open-source library that provides high-quality single-file implementations of Deep Reinforcement Learning algorithms.
It provides a simpler yet scalable development experience by having a straightforward codebase and integrating production tools.
arXiv Detail & Related papers (2021-11-16T22:44:56Z) - Offline Reinforcement Learning with Value-based Episodic Memory [19.12430651038357]
Offline reinforcement learning (RL) shows promise for applying RL to real-world problems.
We propose Expectile V-Learning (EVL), which smoothly interpolates between optimal value learning and behavior cloning; a sketch of the expectile loss it builds on appears after this list.
We present a new offline method called Value-based Episodic Memory (VEM).
arXiv Detail & Related papers (2021-10-19T08:20:11Z) - D4RL: Datasets for Deep Data-Driven Reinforcement Learning [119.49182500071288]
We introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL.
By moving beyond simple benchmark tasks and data collected by partially-trained RL agents, we reveal important and unappreciated deficiencies of existing algorithms.
arXiv Detail & Related papers (2020-04-15T17:18:19Z)
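Both CORL's benchmarking and the D4RL entry above rely on the D4RL datasets; the sketch below shows one way to load such a dataset, assuming the publicly released d4rl package and the classic gym API (the environment name is only an example):

    import gym
    import d4rl  # registers the D4RL environments with gym

    env = gym.make("hopper-medium-v2")
    dataset = d4rl.qlearning_dataset(env)  # dict of transition arrays

    print(dataset["observations"].shape)   # (N, obs_dim)
    print(dataset["actions"].shape)        # (N, act_dim)
    print(dataset["rewards"].shape)        # (N,)
    print(dataset["terminals"].shape)      # (N,)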
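For the Value-based Episodic Memory entry above, the interpolation between optimal value learning and behavior cloning is typically driven by an expectile regression loss; the following sketch assumes the standard expectile form and is not code from that paper:

    import numpy as np

    def expectile_loss(td_error: np.ndarray, tau: float) -> float:
        # |tau - 1{u < 0}| * u^2: tau = 0.5 gives ordinary mean-squared value
        # regression (a behavior-value estimate); tau -> 1 moves the estimate
        # toward the optimal value.
        weight = np.abs(tau - (td_error < 0).astype(np.float64))
        return float(np.mean(weight * td_error ** 2))

    print(expectile_loss(np.random.randn(256), tau=0.7))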