EasyRL: A Simple and Extensible Reinforcement Learning Framework
- URL: http://arxiv.org/abs/2008.01700v2
- Date: Thu, 5 Nov 2020 20:35:33 GMT
- Title: EasyRL: A Simple and Extensible Reinforcement Learning Framework
- Authors: Neil Hulbert, Sam Spillers, Brandon Francis, James Haines-Temons, Ken
Gil Romero, Benjamin De Jager, Sam Wong, Kevin Flora, Bowei Huang, Athirai A.
Irissappane
- Abstract summary: EasyRL provides an interactive graphical user interface for users to train and evaluate RL agents.
EasyRL does not require programming knowledge for training and testing simple built-in RL agents.
EasyRL also supports custom RL agents and environments, which can be highly beneficial for RL researchers in evaluating and comparing their RL models.
- Score: 3.2173369911280023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Reinforcement Learning (RL) has become a popular
field of study as well as a tool for enterprises working on cutting-edge
artificial intelligence research. To this end, many researchers have built RL
frameworks such as OpenAI Gym and KerasRL for ease of use. While these works
have made great strides towards lowering the barrier to entry for those new
to RL, we propose a much simpler framework called EasyRL, which provides an
interactive graphical user interface for users to train and evaluate RL
agents. As it is entirely graphical, EasyRL does not require programming
knowledge for training and testing simple built-in RL agents. EasyRL also
supports custom RL agents and environments, which can be highly beneficial
for RL researchers in evaluating and comparing their RL models.
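The paper describes EasyRL's custom-agent support only at a high level. As a
rough illustration of what a pluggable agent could look like, here is a
minimal sketch of a tabular Q-learning agent behind a generic
choose_action/update interface; the class and method names are hypothetical,
not EasyRL's actual API.

```python
import random
from collections import defaultdict

class CustomAgent:
    """Hypothetical plug-in agent: tabular Q-learning.

    EasyRL's real base class is not specified in the abstract; this sketch
    only illustrates the hooks a GUI-driven framework would need to call.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def choose_action(self, state):
        # Epsilon-greedy exploration over the tabular Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state, done):
        # One-step Q-learning backup.
        target = reward if done else reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

A graphical framework would call choose_action inside its rollout loop and
update after every transition, which is all such an agent needs to expose.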
Related papers
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
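The summary does not detail OfflineMania's API. Assuming it follows the
standard Gymnasium reset/step interface (a reasonable guess for a Gym-style
environment, but an assumption here), collecting an offline dataset from a
behavior policy looks roughly like this:

```python
def collect_offline_dataset(env, policy, n_steps):
    """Roll out a behavior policy and log transitions for offline RL.

    Assumes the Gymnasium API: reset() -> (obs, info) and
    step(action) -> (obs, reward, terminated, truncated, info).
    """
    dataset = []  # list of (state, action, reward, next_state, done)
    state, _ = env.reset()
    for _ in range(n_steps):
        action = policy(state)
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        dataset.append((state, action, reward, next_state, done))
        state = env.reset()[0] if done else next_state
    return dataset
```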
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- OpenRL: A Unified Reinforcement Learning Framework [19.12129820612253]
We present OpenRL, an advanced reinforcement learning (RL) framework.
It is designed to accommodate a diverse array of tasks, from single-agent challenges to complex multi-agent systems.
It integrates Natural Language Processing (NLP) with RL, enabling researchers to address a combination of RL training and language-centric tasks effectively.
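OpenRL's concrete API is not given in this summary. The sketch below only
illustrates the unification idea, treating a single-agent task as the
one-agent special case of a multi-agent one so the same loop can serve both;
every name in it is hypothetical rather than OpenRL's interface.

```python
from typing import Callable, List, Sequence

def unified_step(policies: Sequence[Callable], observations: List) -> List:
    """Map each agent's observation through its policy; with one agent this
    degenerates to ordinary single-agent action selection."""
    return [policy(obs) for policy, obs in zip(policies, observations)]

# Usage: a single-agent task and a two-agent task share the same code path.
single = unified_step([lambda obs: 0], [[0.1, 0.2, 0.0, -0.1]])
multi = unified_step([lambda o: 1, lambda o: 0], [[1.0], [2.0]])
```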
arXiv Detail & Related papers (2023-12-20T12:04:06Z)
- SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores [13.948640763797776]
We present a novel abstraction on the dataflows of RL training, which unifies diverse RL training applications into a general framework.
We develop a scalable, efficient, and distributed RL system called ReaLly Scalable RL (SRL), which enables efficient and massively parallelized training.
SRL is the first in the academic community to perform RL experiments at a large scale with over 15k CPU cores.
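The dataflow abstraction is described only at this high level. A common
decomposition that such systems build on is actor workers producing
transitions into a shared queue and a trainer consuming them in batches; the
sketch below illustrates that pattern with threads standing in for processes
spread over many cores (names and structure are assumptions, not SRL's
implementation).

```python
import queue
import random
import threading

sample_queue = queue.Queue(maxsize=1000)  # decouples actors from the trainer

def actor_worker(n_steps):
    """Generate experience; in a real system this runs on its own core/host."""
    for _ in range(n_steps):
        transition = (random.random(), random.randrange(2), random.random())
        sample_queue.put(transition)

def trainer_worker(n_batches, batch_size=32):
    """Consume experience in batches; stands in for the gradient step."""
    for _ in range(n_batches):
        batch = [sample_queue.get() for _ in range(batch_size)]
        # ... compute gradients from `batch` and update the policy here ...

# 4 actors x 64 steps = 256 transitions = 8 batches of 32 for the trainer.
actors = [threading.Thread(target=actor_worker, args=(64,)) for _ in range(4)]
trainer = threading.Thread(target=trainer_worker, args=(8,))
for t in actors + [trainer]:
    t.start()
for t in actors + [trainer]:
    t.join()
```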
arXiv Detail & Related papers (2023-06-29T05:16:25Z)
- Contrastive Learning as Goal-Conditioned Reinforcement Learning [147.28638631734486]
In reinforcement learning (RL), it is easier to solve a task if given a good representation.
While deep RL should automatically acquire such good representations, prior work often finds that learning representations in an end-to-end fashion is unstable.
We show (contrastive) representation learning methods can be cast as RL algorithms in their own right.
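Concretely, the claim is that a contrastive critic, trained to tell which
goal was actually reached from a given state-action pair, can serve directly
as a goal-conditioned value function. A minimal NumPy sketch of the
InfoNCE-style objective involved (the embedding networks and dot-product
similarity are simplifying assumptions, not the paper's exact setup):

```python
import numpy as np

def contrastive_critic_loss(sa_embed: np.ndarray, goal_embed: np.ndarray) -> float:
    """InfoNCE-style loss: the i-th state-action embedding should score
    highest against the goal actually reached on that trajectory (row i),
    treating the other goals in the batch as negatives.

    sa_embed:   (batch, dim) embeddings of (state, action) pairs
    goal_embed: (batch, dim) embeddings of the corresponding reached goals
    """
    logits = sa_embed @ goal_embed.T                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())        # cross-entropy on diagonal
```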
arXiv Detail & Related papers (2022-06-15T14:34:15Z)
- JORLDY: a fully customizable open source framework for reinforcement learning [3.1864456096282696]
Reinforcement Learning (RL) has been actively researched in both academic and industrial fields.
JORLDY provides more than 20 widely used RL algorithms, implemented in PyTorch.
JORLDY supports multiple RL environments, including OpenAI Gym, Unity ML-Agents, MuJoCo, Super Mario Bros, and Procgen.
arXiv Detail & Related papers (2022-04-11T06:28:27Z)
- All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL [0.5735035463793008]
Upside down reinforcement learning (UDRL) flips the conventional use of the return in the objective function in RL upside down.
UDRL is based purely on supervised learning, and bypasses some prominent issues in RL: bootstrapping, off-policy corrections, and discount factors.
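Since the return moves from the objective into the policy's input, training
reduces to supervised learning on "command-conditioned" examples: the policy
is asked to achieve a given return over a given horizon and is trained to
output the action that was actually taken. A minimal framework-agnostic
sketch of how one trajectory becomes such examples (details such as
discounting are simplified assumptions):

```python
def udrl_training_examples(trajectory):
    """Turn one trajectory into supervised (input, target) pairs.

    trajectory: list of (state, action, reward). For each time step t the
    input is (state_t, desired_return, desired_horizon) computed from what
    actually happened after t, and the target is simply action_t.
    """
    examples = []
    rewards = [r for _, _, r in trajectory]
    for t, (state, action, _) in enumerate(trajectory):
        desired_return = sum(rewards[t:])       # undiscounted return-to-go
        desired_horizon = len(trajectory) - t   # steps remaining
        examples.append(((state, desired_return, desired_horizon), action))
    return examples
```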
arXiv Detail & Related papers (2022-02-24T08:44:11Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures that can be highly competitive with manually designed policies; the results also verify previous design choices for RL policies.
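The reusable DARTS ingredient is the mixed operation: every candidate op on
an edge of a cell is evaluated and the results are blended with
softmax-weighted architecture parameters, making the architecture choice
itself differentiable. A minimal NumPy sketch (the candidate ops here are
placeholders, not the paper's search space):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Placeholder candidate operations for one edge of a DARTS cell.
candidate_ops = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.maximum(x, 0),  # ReLU
    lambda x: np.zeros_like(x),  # "zero" op (prunes the edge)
]

def mixed_op(x, alpha):
    """DARTS mixed operation: softmax(alpha) blends all candidate ops, so
    gradients w.r.t. alpha rank the ops during the search."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops))

x = np.array([-1.0, 2.0, 0.5])
alpha = np.array([0.1, 0.5, -0.3])   # learnable architecture parameters
print(mixed_op(x, alpha))
```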
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- Improving Reinforcement Learning with Human Assistance: An Argument for Human Subject Studies with HIPPO Gym [21.4215863934377]
Reinforcement learning (RL) is a popular machine learning paradigm for game playing, robotics control, and other sequential decision tasks.
This article introduces our new open-source RL framework, the Human Input Parsing Platform for OpenAI Gym (HIPPO Gym).
arXiv Detail & Related papers (2021-02-02T12:56:02Z)
- Learning to Prune Deep Neural Networks via Reinforcement Learning [64.85939668308966]
PuRL is a deep reinforcement learning-based algorithm for pruning neural networks.
It achieves sparsity and accuracy comparable to current state-of-the-art methods.
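The summary leaves PuRL's formulation unspecified. As a rough sketch of the
general framing, an RL pruning agent picks a sparsity level per layer and is
rewarded for keeping accuracy while increasing sparsity; below, magnitude
pruning and a simple reward illustrate the building blocks of such a loop
(all specifics are assumptions, not PuRL's algorithm):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def pruning_reward(accuracy: float, sparsity: float, beta: float = 0.5) -> float:
    """Reward seen by the pruning agent: keep accuracy, encourage sparsity.
    The trade-off coefficient beta is an assumption for this sketch."""
    return accuracy + beta * sparsity

layer = np.random.randn(8, 8)
pruned = magnitude_prune(layer, sparsity=0.75)
print(f"fraction zeroed: {(pruned == 0).mean():.2f}")
```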
arXiv Detail & Related papers (2020-07-09T13:06:07Z)
- MushroomRL: Simplifying Reinforcement Learning Research [60.70556446270147]
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
arXiv Detail & Related papers (2020-01-04T17:23:34Z)