CleanRL: High-quality Single-file Implementations of Deep Reinforcement
Learning Algorithms
- URL: http://arxiv.org/abs/2111.08819v1
- Date: Tue, 16 Nov 2021 22:44:56 GMT
- Title: CleanRL: High-quality Single-file Implementations of Deep Reinforcement
Learning Algorithms
- Authors: Shengyi Huang, Rousslan Fernand Julien Dossa, Chang Ye, Jeff Braga
- Abstract summary: CleanRL is an open-source library that provides high-quality single-file implementations of Deep Reinforcement Learning algorithms.
It provides a simpler yet scalable development experience by having a straightforward codebase and integrating production tools.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CleanRL is an open-source library that provides high-quality single-file
implementations of Deep Reinforcement Learning algorithms. It provides a
simpler yet scalable development experience through a straightforward codebase
and integrated production tools that help researchers interact with and scale
experiments. In
CleanRL, we put all details of an algorithm into a single file, making these
performance-relevant details easier to recognize. Additionally, an experiment
tracking feature is available to help log metrics, hyperparameters, videos of
an agent's gameplay, dependencies, and more to the cloud. Despite succinct
implementations, we have also designed tools to help scale, at one point
orchestrating experiments on more than 2000 machines simultaneously via Docker
and cloud providers. Finally, we have ensured the quality of the
implementations by benchmarking against a variety of environments. The source
code of CleanRL can be found at https://github.com/vwxyzjn/cleanrl
Related papers
- Advanced Detection of Source Code Clones via an Ensemble of Unsupervised Similarity Measures [0.0]
This research introduces a novel ensemble learning approach for code similarity assessment.
The key idea is that the strengths of a diverse set of similarity measures can complement each other and mitigate individual weaknesses.
arXiv Detail & Related papers (2024-05-03T13:42:49Z)
- Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond [52.656743602538825]
Fine-tuning pre-trained code models incurs a large computational cost.
We conduct an experimental study to explore what happens to layer-wise pre-trained representations and their encoded code knowledge during fine-tuning.
We propose Telly to efficiently fine-tune pre-trained code models via layer freezing.
arXiv Detail & Related papers (2023-04-11T13:34:13Z)
- CORL: Research-oriented Deep Offline Reinforcement Learning Library [48.47248460865739]
CORL is an open-source library that provides thoroughly benchmarked single-file implementations of reinforcement learning algorithms.
It emphasizes a simple development experience with a straightforward codebase and a modern analysis tracking tool.
arXiv Detail & Related papers (2022-10-13T15:40:11Z)
- DiSparse: Disentangled Sparsification for Multitask Model Compression [92.84435347164435]
DiSparse is a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme.
Our experimental results demonstrate superior performance on various configurations and settings.
arXiv Detail & Related papers (2022-06-09T17:57:46Z)
- Parallel Actors and Learners: A Framework for Generating Scalable RL Implementations [14.432131909590824]
Reinforcement Learning (RL) has achieved significant success in application domains such as robotics, games, and health care.
Current implementations exhibit poor performance due to challenges such as irregular memory accesses and synchronization overheads.
We propose a framework for generating scalable reinforcement learning implementations on multicore systems.
arXiv Detail & Related papers (2021-10-03T21:00:53Z)
- Generative and reproducible benchmarks for comprehensive evaluation of machine learning classifiers [6.605210393590192]
DIverse and GENerative ML Benchmark (DIGEN) is a collection of synthetic datasets for benchmarking machine learning algorithms.
The resource with extensive documentation and analyses is open-source and available on GitHub.
arXiv Detail & Related papers (2021-07-14T03:58:02Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.