Residual Skill Policies: Learning an Adaptable Skill-based Action Space
for Reinforcement Learning for Robotics
- URL: http://arxiv.org/abs/2211.02231v1
- Date: Fri, 4 Nov 2022 02:42:17 GMT
- Title: Residual Skill Policies: Learning an Adaptable Skill-based Action Space
for Reinforcement Learning for Robotics
- Authors: Krishan Rana, Ming Xu, Brendan Tidd, Michael Milford and Niko Sünderhauf
- Abstract summary: Skill-based reinforcement learning (RL) has emerged as a promising strategy to leverage prior knowledge for accelerated robot learning.
We propose accelerating exploration in the skill space using state-conditioned generative models.
We validate our approach across four challenging manipulation tasks, demonstrating our ability to learn across task variations.
- Score: 18.546688182454236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Skill-based reinforcement learning (RL) has emerged as a promising strategy
to leverage prior knowledge for accelerated robot learning. Skills are
typically extracted from expert demonstrations and are embedded into a latent
space from which they can be sampled as actions by a high-level RL agent.
However, this skill space is expansive, and not all skills are relevant for a
given robot state, making exploration difficult. Furthermore, the downstream RL
agent is limited to learning structurally similar tasks to those used to
construct the skill space. We firstly propose accelerating exploration in the
skill space using state-conditioned generative models to directly bias the
high-level agent towards only sampling skills relevant to a given state based
on prior experience. Next, we propose a low-level residual policy for
fine-grained skill adaptation enabling downstream RL agents to adapt to unseen
task variations. Finally, we validate our approach across four challenging
manipulation tasks that differ from those used to build the skill space,
demonstrating our ability to learn across task variations while significantly
accelerating exploration, outperforming prior works. Code and videos are
available on our project website: https://krishanrana.github.io/reskill.
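To make the abstract concrete, the following is a minimal PyTorch sketch of the two proposed components: a state-conditioned prior over latent skills, and a low-level residual policy that corrects the decoded skill action. This is an illustration under stated assumptions, not the authors' implementation; the module names (SkillPrior, SkillDecoder, ResidualPolicy) and all dimensions are hypothetical.

    import torch
    import torch.nn as nn

    STATE_DIM, SKILL_DIM, ACTION_DIM = 32, 8, 7  # illustrative sizes, not from the paper

    def mlp(inp, out):
        return nn.Sequential(nn.Linear(inp, 128), nn.ReLU(), nn.Linear(128, out))

    class SkillPrior(nn.Module):
        """State-conditioned generative model p(z|s) over latent skills."""
        def __init__(self):
            super().__init__()
            self.mu = mlp(STATE_DIM, SKILL_DIM)
            self.log_std = mlp(STATE_DIM, SKILL_DIM)
        def forward(self, s):
            return torch.distributions.Normal(self.mu(s), self.log_std(s).exp())

    class SkillDecoder(nn.Module):
        """Low-level decoder mapping (state, latent skill) to a primitive action."""
        def __init__(self):
            super().__init__()
            self.net = mlp(STATE_DIM + SKILL_DIM, ACTION_DIM)
        def forward(self, s, z):
            return self.net(torch.cat([s, z], dim=-1))

    class ResidualPolicy(nn.Module):
        """Small corrective policy for fine-grained skill adaptation."""
        def __init__(self):
            super().__init__()
            self.net = mlp(STATE_DIM + ACTION_DIM, ACTION_DIM)
        def forward(self, s, a_skill):
            return self.net(torch.cat([s, a_skill], dim=-1))

    prior, decoder, residual = SkillPrior(), SkillDecoder(), ResidualPolicy()
    s = torch.randn(1, STATE_DIM)

    # The high-level agent picks a latent skill; here we simply sample from the
    # prior, whereas during RL training the agent's policy would be biased
    # towards this prior so that exploration concentrates on skills relevant
    # to the current state.
    z = prior(s).sample()
    a_skill = decoder(s, z)             # nominal action decoded from the skill
    a = a_skill + residual(s, a_skill)  # residual correction: a = a_skill + delta

The last line is the key design choice: the residual network only has to learn a small state-dependent correction on top of the decoded skill action, which is what lets the downstream agent adapt to task variations the skill space was not built for.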
Related papers
- EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data [22.471559284344462]
Most reinforcement learning (RL) methods focus on learning optimal policies over low-level action spaces.
While these methods can perform well in their training environments, they lack the flexibility to transfer to new tasks.
We demonstrate through experiments in sparse, image-based robot manipulation environments that EXTRACT can learn new tasks more quickly than prior works.
arXiv Detail & Related papers (2024-06-25T17:50:03Z)
- Agentic Skill Discovery [19.5703917813767]
Language-conditioned robotic skills make it possible to apply the high-level reasoning of Large Language Models (LLMs) to low-level robotic control.
A remaining challenge is to acquire a diverse set of fundamental skills.
We introduce a novel framework for skill discovery that is entirely driven by LLMs.
arXiv Detail & Related papers (2024-05-23T19:44:03Z)
- Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts [58.220879689376744]
Reinforcement learning (RL) is a powerful approach for acquiring a good-performing policy.
We propose Diverse Skill Learning (Di-SkilL) for learning diverse skills.
We show on challenging robot simulation tasks that Di-SkilL can learn diverse and performant skills.
arXiv Detail & Related papers (2024-03-11T17:49:18Z)
- Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance [66.615355754712]
BOSS learns to accomplish new tasks by performing "skill bootstrapping".
We demonstrate through experiments in realistic household environments that agents trained with our LLM-guided bootstrapping procedure outperform those trained with naive bootstrapping.
arXiv Detail & Related papers (2023-10-16T02:43:47Z)
- Human-Timescale Adaptation in an Open-Ended Task Space [56.55530165036327]
We show that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans.
Our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
arXiv Detail & Related papers (2023-01-18T15:39:21Z)
- Choreographer: Learning and Adapting Skills in Imagination [60.09911483010824]
We present Choreographer, a model-based agent that exploits its world model to learn and adapt skills in imagination.
Our method decouples the exploration and skill learning processes, being able to discover skills in the latent state space of the model.
Choreographer is able to learn skills both from offline data, and by collecting data simultaneously with an exploration policy.
arXiv Detail & Related papers (2022-11-23T23:31:14Z)
- Hierarchical Kickstarting for Skill Transfer in Reinforcement Learning [27.69559938165733]
Practising and honing skills forms a fundamental component of how humans learn, yet artificial agents are rarely specifically trained to perform them.
We investigate how skills can be incorporated into the training of reinforcement learning (RL) agents in complex environments.
Our experiments show that learning with a prior knowledge of useful skills can significantly improve the performance of agents on complex problems.
arXiv Detail & Related papers (2022-07-23T19:23:29Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks [85.56153200251713]
We introduce EMBR, a model-based RL method for learning primitive skills that are suitable for completing long-horizon visuomotor tasks.
On a Franka Emika robot arm, we find that EMBR enables the robot to complete three long-horizon visuomotor tasks at 85% success rate.
arXiv Detail & Related papers (2021-09-21T16:48:07Z)
- Accelerating Reinforcement Learning with Learned Skill Priors [20.268358783821487]
Most modern reinforcement learning approaches learn every task from scratch.
One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task.
We show that learned skill priors are essential for effective skill transfer from rich datasets (a sketch of this prior-regularized objective follows the list).
arXiv Detail & Related papers (2020-10-22T17:59:51Z)
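The last entry above (learned skill priors) underpins the prior-guided exploration used in the main paper. As a rough sketch in our own notation, an assumed general form rather than text from either paper, such methods replace the entropy bonus of maximum-entropy RL with a KL penalty that keeps the high-level policy close to the learned state-conditioned skill prior:

    J(\pi) = \mathbb{E}_{\pi}\Big[ \sum_t \tilde{r}(s_t, z_t)
             - \alpha \, D_{\mathrm{KL}}\big( \pi(z_t \mid s_t) \,\|\, p(z_t \mid s_t) \big) \Big]

Here z_t is the latent skill chosen in state s_t, p(z|s) is the learned prior, and \alpha trades off task reward against staying close to the prior.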
This list is automatically generated from the titles and abstracts of the papers on this site.