Improving Reinforcement Learning with Human Assistance: An Argument for
Human Subject Studies with HIPPO Gym
- URL: http://arxiv.org/abs/2102.02639v1
- Date: Tue, 2 Feb 2021 12:56:02 GMT
- Title: Improving Reinforcement Learning with Human Assistance: An Argument for
Human Subject Studies with HIPPO Gym
- Authors: Matthew E. Taylor, Nicholas Nissen, Yuan Wang, Neda Navidi
- Abstract summary: Reinforcement learning (RL) is a popular machine learning paradigm for game playing, robotics control, and other sequential decision tasks.
This article introduces our new open-source RL framework, the Human Input Parsing Platform for OpenAI Gym (HIPPO Gym).
- Score: 21.4215863934377
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reinforcement learning (RL) is a popular machine learning paradigm for game
playing, robotics control, and other sequential decision tasks. However, RL
agents often have long learning times with high data requirements because they
begin by acting randomly. In order to better learn in complex tasks, this
article argues that an external teacher can often significantly help the RL
agent learn.
OpenAI Gym is a common framework for RL research, including a large number of
standard environments and agents, making RL research significantly more
accessible. This article introduces our new open-source RL framework, the Human
Input Parsing Platform for OpenAI Gym (HIPPO Gym), and the design decisions
that went into its creation. The goal of this platform is to facilitate
human-RL research, again lowering the bar so that more researchers can quickly
investigate different ways that human teachers could assist RL agents,
including learning from demonstrations, learning from feedback, or curriculum
learning.
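The "learning from feedback" setting the abstract mentions can be made concrete with a small sketch. The toy environment, teacher function, and weighting below are invented for illustration (this is not HIPPO Gym's actual API): a scalar signal from a human teacher is blended into the learning target alongside the sparse environment reward, in the spirit of TAMER-style reward shaping, so the agent gets useful gradient long before it ever reaches the goal.

```python
import random

def shaped_reward(env_reward, human_feedback, beta=0.5):
    """Blend environment reward with human feedback (hypothetical weighting)."""
    return env_reward + beta * human_feedback

def step(state, action):
    """A trivial 1-D corridor: reach state 5 by moving right (+1)."""
    state = max(0, min(5, state + action))
    return state, (1.0 if state == 5 else 0.0), state == 5

def human_teacher(state, action):
    """Stand-in for a live human: approves moves toward the goal."""
    return 1.0 if action == 1 else -1.0

random.seed(0)
q, alpha, gamma, eps = {}, 0.1, 0.9, 0.2  # tabular Q-values, keyed (state, action)
for _ in range(200):
    state, done = 0, False
    while not done:
        if random.random() < eps:
            action = random.choice([-1, 1])
        else:
            action = max([-1, 1], key=lambda a: q.get((state, a), 0.0))
        nxt, r, done = step(state, action)
        target = shaped_reward(r, human_teacher(state, action))
        best_next = 0.0 if done else max(q.get((nxt, a), 0.0) for a in [-1, 1])
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (target + gamma * best_next - old)
        state = nxt
```

With the environment reward alone, the agent sees nothing until it stumbles into state 5; the teacher's per-step signal pushes Q-values toward rightward moves from the first episode.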
Related papers
- Gymnasium: A Standard Interface for Reinforcement Learning Environments [5.7144222327514616]
Reinforcement Learning (RL) is a growing field that has the potential to revolutionize many areas of artificial intelligence.
Despite its promise, RL research is often hindered by the lack of standardization in environment and algorithm implementations.
Gymnasium is an open-source library that provides a standard API for RL environments.
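The standard interface Gymnasium defines centers on two calls: `reset()` returns `(observation, info)` and `step()` returns `(observation, reward, terminated, truncated, info)`. The toy environment below is a hypothetical sketch of that interface shape only; real code would subclass `gymnasium.Env` and declare action and observation spaces.

```python
class CoinFlipEnv:
    """Toy environment: action 1 is the correct guess, action 0 is not."""

    def reset(self, seed=None):
        self._steps = 0
        return 0, {}  # (observation, info)

    def step(self, action):
        self._steps += 1
        reward = 1.0 if action == 1 else 0.0
        terminated = action == 1       # episode ends on a correct guess
        truncated = self._steps >= 10  # separate time-limit signal
        return 0, reward, terminated, truncated, {}

# The interaction loop every Gymnasium-compatible agent relies on:
env = CoinFlipEnv()
obs, info = env.reset(seed=0)
total = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = 1  # a fixed "policy" for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
```

Splitting episode end into `terminated` (environment-defined) and `truncated` (time limit) is the part of the standard that most often differs across older, ad hoc environment implementations.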
arXiv Detail & Related papers (2024-07-24T06:35:05Z)
- Abstracted Trajectory Visualization for Explainability in Reinforcement Learning [2.1028463367241033]
Explainable AI (XAI) has demonstrated the potential to help reinforcement learning (RL) practitioners to understand how RL models work.
XAI for users who do not have RL expertise (non-RL experts) has not been studied sufficiently.
We argue that abstracted trajectories, which depict transitions between the major states of the RL model, will help non-RL experts build a mental model of the agents.
arXiv Detail & Related papers (2024-02-05T21:17:44Z)
- OpenRL: A Unified Reinforcement Learning Framework [19.12129820612253]
We present OpenRL, an advanced reinforcement learning (RL) framework.
It is designed to accommodate a diverse array of tasks, from single-agent challenges to complex multi-agent systems.
It integrates Natural Language Processing (NLP) with RL, enabling researchers to address a combination of RL training and language-centric tasks effectively.
arXiv Detail & Related papers (2023-12-20T12:04:06Z)
- A Survey of Meta-Reinforcement Learning [69.76165430793571]
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
arXiv Detail & Related papers (2023-01-19T12:01:41Z)
- JORLDY: a fully customizable open source framework for reinforcement learning [3.1864456096282696]
Reinforcement Learning (RL) has been actively researched in both academic and industrial fields.
JORLDY provides more than 20 widely used RL algorithms implemented in PyTorch.
JORLDY supports multiple RL environments, including OpenAI Gym, Unity ML-Agents, MuJoCo, Super Mario Bros, and Procgen.
arXiv Detail & Related papers (2022-04-11T06:28:27Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
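The change to the action interface can be sketched in miniature. The primitive library, argument shapes, and 1-D "robot" below are invented for illustration (the paper's actual primitives are parameterized robot controllers): the policy emits one `(primitive_index, argument)` pair per decision, and each primitive expands into a short sequence of low-level actions.

```python
def move_to(x_target, x):
    """Primitive: step toward x_target in increments of at most 0.5."""
    actions = []
    while abs(x_target - x) > 1e-6:
        delta = max(-0.5, min(0.5, x_target - x))
        actions.append(("move", delta))
        x += delta
    return actions, x

def grip(force, x):
    """Primitive: a single low-level gripper command."""
    return [("grip", force)], x

PRIMITIVES = [move_to, grip]

def execute(plan, x=0.0):
    """Expand (primitive_index, argument) pairs into low-level actions."""
    trace = []
    for idx, arg in plan:
        actions, x = PRIMITIVES[idx](arg, x)
        trace.extend(actions)
    return trace, x

# One policy decision per primitive: "move to 1.2, then grip with force 0.8".
trace, x = execute([(0, 1.2), (1, 0.8)])
```

Because the policy chooses among a handful of primitives plus continuous arguments, each decision covers many low-level control steps, which is the source of the reported efficiency gain.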
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures that can be highly competitive with manually designed policies, while also verifying previous design choices for RL policies.
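The mechanism behind a DARTS supernet can be shown with a minimal sketch. The candidate operations below (identity, a doubling transform, a zero op) are toy stand-ins for the convolutional cells a real search would use: each supernet edge outputs a softmax-weighted mixture of candidates, which makes the architecture choice differentiable and trainable by gradient descent alongside the policy.

```python
import math

CANDIDATES = [
    lambda x: x,        # identity / skip connection
    lambda x: 2.0 * x,  # stand-in for a learned transform
    lambda x: 0.0,      # "zero" op, effectively pruning the edge
]

def softmax(alphas):
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_op(x, alphas):
    """Differentiable blend of candidate ops, weighted by softmax(alphas)."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, CANDIDATES))

# As search drives one architecture weight up, the mixture approaches the
# corresponding discrete op, which is then kept in the final cell.
alphas = [0.0, 5.0, 0.0]  # search strongly prefers the second op
y = mixed_op(1.0, alphas)
chosen = max(range(len(alphas)), key=lambda i: alphas[i])
```

After search converges, discretizing by keeping the argmax operation per edge yields the final architecture, which is why the abstract speaks of the supernet "gradually learning better cells."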
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned [111.06812202454364]
We present a number of case studies involving robotic deep RL.
We discuss commonly perceived challenges in deep RL and how they have been addressed in these works.
We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting.
arXiv Detail & Related papers (2021-02-04T22:09:28Z)
- EasyRL: A Simple and Extensible Reinforcement Learning Framework [3.2173369911280023]
EasyRL provides an interactive graphical user interface for users to train and evaluate RL agents.
EasyRL does not require programming knowledge for training and testing simple built-in RL agents.
EasyRL also supports custom RL agents and environments, which can be highly beneficial for RL researchers in evaluating and comparing their RL models.
arXiv Detail & Related papers (2020-08-04T17:02:56Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.