Generalizing to New Tasks via One-Shot Compositional Subgoals
- URL: http://arxiv.org/abs/2205.07716v1
- Date: Mon, 16 May 2022 14:30:11 GMT
- Title: Generalizing to New Tasks via One-Shot Compositional Subgoals
- Authors: Xihan Bian and Oscar Mendez and Simon Hadfield
- Abstract summary: The ability to generalize to previously unseen tasks with little to no supervision is a key challenge in modern machine learning research.
We introduce CASE, which addresses this challenge by training an Imitation Learning agent using adaptive "near future" subgoals.
Our experiments show that the proposed approach consistently outperforms the previous state-of-the-art compositional Imitation Learning approach by 30%.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The ability to generalize to previously unseen tasks with little to no
supervision is a key challenge in modern machine learning research. It is also
a cornerstone of a future "General AI". Any artificially intelligent agent
deployed in a real-world application must adapt on the fly to unknown
environments. Researchers often rely on reinforcement and imitation learning
to provide online adaptation to new tasks through trial-and-error learning.
However, this can be challenging for complex tasks that require many timesteps
or large numbers of subtasks to complete. These "long-horizon" tasks suffer
from sample inefficiency and can require extremely long training times before
the agent can learn to perform the necessary long-term planning. In this work,
we introduce CASE, which addresses these issues by training an Imitation
Learning agent using adaptive "near future" subgoals. These subgoals
are recalculated at each step using compositional arithmetic in a learned
latent representation space. In addition to improving learning efficiency for
standard long-term tasks, this approach also makes it possible to perform
one-shot generalization to previously unseen tasks, given only a single
reference trajectory for the task in a different environment. Our experiments
show that the proposed approach consistently outperforms the previous
state-of-the-art compositional Imitation Learning approach by 30%.
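The abstract does not specify an implementation, but the phrase "compositional arithmetic in a learned latent representation space" suggests a simple structure: embed states with a learned encoder, derive a task direction from a reference trajectory, and step the current latent a short way along it. The sketch below is a hypothetical illustration of that reading; the encoder, the `near_future_subgoal` function, and the `horizon` fraction are all assumptions of this sketch, not the authors' code.

```python
import numpy as np

# Hypothetical sketch of compositional "near future" subgoals (not the
# CASE implementation; the linear latent arithmetic is an assumption).

def encode(state: np.ndarray) -> np.ndarray:
    """Stand-in for a learned state encoder phi(s) -> z.

    A real system would use a trained network; a fixed random projection
    keeps the sketch self-contained.
    """
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, state.shape[0]))
    return W @ state

def near_future_subgoal(z_current, z_ref_start, z_ref_goal, horizon=0.2):
    """Compose a subgoal a small step toward the goal.

    The reference trajectory's endpoints define a task direction
    (z_ref_goal - z_ref_start); the subgoal shifts the current latent a
    fraction `horizon` along it, so it stays "near future" rather than
    jumping to the distant final goal.
    """
    return z_current + horizon * (z_ref_goal - z_ref_start)

# One-shot flavour: the reference trajectory was recorded in a different
# environment, but its latent task direction is reused directly.
ref_start, ref_goal = np.zeros(4), np.full(4, 2.0)
state = np.ones(4)

# Recalculated at every timestep as `state` changes:
subgoal = near_future_subgoal(encode(state), encode(ref_start), encode(ref_goal))
print(subgoal.shape)  # (8,) -- latent subgoal handed to the IL policy
```

Per the abstract, the subgoal is recalculated at each step, presumably conditioning the imitation policy; the one-shot setting reuses the same arithmetic with a single reference trajectory from a different environment.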
Related papers
- You Only Live Once: Single-Life Reinforcement Learning (arXiv, 2022-10-17)
  In many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial.
  We formalize this problem setting, where an agent must complete a task within a single episode without interventions.
  We propose an algorithm, $Q$-weighted adversarial learning (QWALE), which employs a distribution matching strategy.
- Towards More Generalizable One-shot Visual Imitation Learning (arXiv, 2021-10-26)
  A general-purpose robot should be able to master a wide range of tasks and quickly learn a novel one by leveraging past experiences.
  One-shot imitation learning (OSIL) approaches this goal by training an agent with (pairs of) expert demonstrations.
  We push for a higher level of generalization ability by investigating a more ambitious multi-task setup.
- Hierarchical Skills for Efficient Exploration (arXiv, 2021-10-20)
  In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
  Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
  We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
- Hierarchical Few-Shot Imitation with Skill Transition Models (arXiv, 2021-07-19)
  Few-shot Imitation with Skill Transition Models (FIST) is an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks.
  We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments.
- Latent Skill Planning for Exploration and Transfer (arXiv, 2020-11-27)
  In this paper, we investigate how these two approaches can be integrated into a single reinforcement learning agent.
  We leverage the idea of partial amortization for fast adaptation at test time.
  We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks.
- Planning to Explore via Self-Supervised World Models (arXiv, 2020-05-12)
  Plan2Explore is a self-supervised reinforcement learning agent.
  We present a new approach to self-supervised exploration and fast adaptation to new tasks.
  Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
- Transforming task representations to perform novel tasks (arXiv, 2020-05-08)
  An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot).
  We propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks.
- Trying AGAIN instead of Trying Longer: Prior Learning for Automatic Curriculum Learning (arXiv, 2020-04-07)
  A major challenge in the Deep RL (DRL) community is to train agents able to generalize over unseen situations.
  We propose a two-stage ACL approach where 1) a teacher algorithm first learns to train a DRL agent with a high-exploration curriculum, and then 2) distills learned priors from the first run to generate an "expert curriculum" (a generic sketch of this two-stage structure appears after this list).
  Besides demonstrating 50% improvements on average over the current state of the art, this work aims to give a first example of a new research direction oriented towards refining ACL techniques over multiple learners.
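The two-stage structure described in the AGAIN entry lends itself to a compact sketch. The following is a generic, hypothetical rendering of "high-exploration teacher, then distilled expert curriculum"; every name (`stage_one`, `train_step`, the progress-proportional sampling) is an assumption of this sketch, not the paper's algorithm.

```python
import random
from collections import defaultdict

def stage_one(tasks, train_step, episodes=1000, epsilon=0.5):
    """High-exploration teacher: sample tasks broadly and record how much
    learning progress each one produced (the prior to distill later)."""
    progress = defaultdict(float)
    for _ in range(episodes):
        if random.random() < epsilon:                 # explore the task space
            task = random.choice(tasks)
        else:                                         # exploit promising tasks
            task = max(tasks, key=lambda t: progress[t])
        progress[task] += train_step(task)            # e.g. return improvement
    return progress

def stage_two(tasks, prior, train_step, episodes=1000):
    """'Expert curriculum': train a fresh learner on tasks sampled in
    proportion to the prior distilled from the first run."""
    total = sum(max(prior[t], 0.0) for t in tasks)
    weights = ([max(prior[t], 0.0) / total for t in tasks]
               if total > 0 else [1.0 / len(tasks)] * len(tasks))
    for _ in range(episodes):
        train_step(random.choices(tasks, weights=weights, k=1)[0])

# Toy usage with dummy learners:
tasks = ["easy", "medium", "hard"]
prior = stage_one(tasks, train_step=lambda t: random.random())
stage_two(tasks, prior, train_step=lambda t: None)
```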