MushroomRL: Simplifying Reinforcement Learning Research
- URL: http://arxiv.org/abs/2001.01102v2
- Date: Thu, 9 Jan 2020 15:11:21 GMT
- Title: MushroomRL: Simplifying Reinforcement Learning Research
- Authors: Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli and
Jan Peters
- Abstract summary: MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
- Score: 60.70556446270147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MushroomRL is an open-source Python library developed to simplify the process
of implementing and running Reinforcement Learning (RL) experiments. Compared
to other available libraries, MushroomRL was created to provide a comprehensive
and flexible framework that minimizes the effort of implementing and testing
novel RL methodologies. Indeed, the architecture of MushroomRL provides every
component of an RL problem out of the box, so that most of the time users need
only focus on implementing their own algorithms and experiments. The result is
a library from which RL researchers can benefit significantly during the
critical phase of empirically evaluating their work. MushroomRL's stable code,
tutorials and documentation can be found at https://github.com/MushroomRL/mushroom-rl.
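To illustrate the component-based workflow the abstract describes, here is a minimal sketch along the lines of the library's documented tabular Q-Learning tutorial. Module paths and signatures may differ between MushroomRL versions, and the grid-world configuration and hyperparameter values are illustrative, not prescriptive.

```python
from mushroom_rl.algorithms.value import QLearning
from mushroom_rl.core import Core
from mushroom_rl.environments import GridWorld
from mushroom_rl.policy import EpsGreedy
from mushroom_rl.utils.parameters import Parameter

# Environment: the library ships ready-made MDPs such as a simple grid world.
mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))

# Exploration policy and learning rate, wrapped in the library's Parameter class.
policy = EpsGreedy(epsilon=Parameter(value=0.1))
agent = QLearning(mdp.info, policy, learning_rate=Parameter(value=0.6))

# Core runs the agent-environment interaction loop: learn, then evaluate greedily.
core = Core(agent, mdp)
core.learn(n_steps=10_000, n_steps_per_fit=1)
policy.set_epsilon(Parameter(value=0.0))
dataset = core.evaluate(n_episodes=10)
```

The point of the sketch is the division of labor the abstract claims: environment, policy, and algorithm are prebuilt components, and Core wires them together so that a new experiment is mostly a matter of swapping parts.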
Related papers
- Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning [41.971465819626005]
We present Open RL Benchmark, a set of fully tracked RL experiments.
Open RL Benchmark is community-driven: anyone can download, use, and contribute to the data.
Special care is taken to ensure that each experiment is precisely reproducible.
arXiv Detail & Related papers (2024-02-05T14:32:00Z)
- OpenRL: A Unified Reinforcement Learning Framework [19.12129820612253]
We present OpenRL, an advanced reinforcement learning (RL) framework.
It is designed to accommodate a diverse array of tasks, from single-agent challenges to complex multi-agent systems.
It integrates Natural Language Processing (NLP) with RL, enabling researchers to address a combination of RL training and language-centric tasks effectively.
arXiv Detail & Related papers (2023-12-20T12:04:06Z)
- SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores [13.948640763797776]
We present a novel abstraction of the dataflows of RL training, which unifies diverse RL training applications into a general framework.
We develop ReaLly Scalable RL (SRL), a scalable and efficient distributed RL system that enables massively parallelized training.
SRL is the first system from the academic community to perform RL experiments at large scale, with over 15k CPU cores.
arXiv Detail & Related papers (2023-06-29T05:16:25Z)
- RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control [7.259696592534715]
Deep Reinforcement Learning (RL) can yield capable agents and control policies in several domains but is commonly plagued by prohibitively long training times.
We present RLtools, a dependency-free, header-only, pure C++ library for deep supervised and reinforcement learning.
arXiv Detail & Related papers (2023-06-06T09:26:43Z)
- LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning [78.2286146954051]
LCRL implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs).
We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL.
arXiv Detail & Related papers (2022-09-21T13:21:00Z)
- Recurrent Model-Free RL is a Strong Baseline for Many POMDPs [73.39666827525782]
Many problems in RL, such as meta RL, robust RL, and generalization in RL, can be cast as POMDPs.
In theory, simply augmenting model-free RL with memory, such as recurrent neural networks, provides a general approach to solving all types of POMDPs.
Prior work has found that such recurrent model-free RL methods tend to perform worse than more specialized algorithms that are designed for specific types of POMDPs.
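As a hedged, generic illustration of the memory-augmentation idea above (not the paper's own implementation), a recurrent policy can be sketched in PyTorch as follows; all names and dimensions here are hypothetical.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Generic actor network with an LSTM memory over observation histories."""

    def __init__(self, obs_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); hidden carries memory across calls.
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.lstm(x, hidden)
        return self.head(x), hidden

# Acting in a POMDP: feed one observation at a time, threading the hidden
# state through so the policy conditions on the whole history.
policy = RecurrentPolicy(obs_dim=4, act_dim=2)
obs = torch.randn(1, 1, 4)
logits, hidden = policy(obs)            # first step: hidden defaults to zeros
logits, hidden = policy(obs, hidden)    # later steps: memory is carried over
```

The hidden state is what turns a memoryless policy into a history-dependent one, which is the entire generality claim for POMDPs.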
arXiv Detail & Related papers (2021-10-11T07:09:14Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is compatible with both off-policy and on-policy RL algorithms, needing only minor changes to preexisting code.
We show that the supernet gradually learns better cells, yielding alternative architectures that are highly competitive with manually designed policies while also validating previous design choices for RL policies.
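The core DARTS mechanism named above, a softmax over learnable architecture parameters weighting candidate operations, can be sketched generically as follows. This illustrates the technique only, not the RL-DARTS code; the candidate operation set is hypothetical.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate ops."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters: one logit per candidate operation,
        # trained by gradient descent alongside the network weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, each mixed op is discretized to its highest-weight candidate.
op = MixedOp(channels=8)
out = op(torch.randn(1, 8, 16, 16))
```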
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- EasyRL: A Simple and Extensible Reinforcement Learning Framework [3.2173369911280023]
EasyRL provides an interactive graphical user interface for users to train and evaluate RL agents.
EasyRL does not require programming knowledge for training and testing simple built-in RL agents.
EasyRL also supports custom RL agents and environments, which can be highly beneficial for RL researchers in evaluating and comparing their RL models.
arXiv Detail & Related papers (2020-08-04T17:02:56Z)
- Learning to Prune Deep Neural Networks via Reinforcement Learning [64.85939668308966]
PuRL is a deep reinforcement learning-based algorithm for pruning neural networks.
It achieves sparsity and accuracy comparable to current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-09T13:06:07Z)
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.