Multi-task curriculum learning in a complex, visual, hard-exploration
domain: Minecraft
- URL: http://arxiv.org/abs/2106.14876v1
- Date: Mon, 28 Jun 2021 17:50:40 GMT
- Title: Multi-task curriculum learning in a complex, visual, hard-exploration
domain: Minecraft
- Authors: Ingmar Kanitscheider, Joost Huizinga, David Farhi, William Hebgen
Guss, Brandon Houghton, Raul Sampedro, Peter Zhokhov, Bowen Baker, Adrien
Ecoffet, Jie Tang, Oleg Klimov, Jeff Clune
- Abstract summary: We explore curriculum learning in a complex, visual domain with many hard exploration challenges: Minecraft.
We find that learning progress is a reliable measure of learnability for automatically constructing an effective curriculum.
- Score: 18.845438529816004
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important challenge in reinforcement learning is training agents that can
solve a wide variety of tasks. If tasks depend on each other (e.g. needing to
learn to walk before learning to run), curriculum learning can speed up
learning by focusing on the next best task to learn. We explore curriculum
learning in a complex, visual domain with many hard exploration challenges:
Minecraft. We find that learning progress (defined as a change in success
probability of a task) is a reliable measure of learnability for automatically
constructing an effective curriculum. We introduce a learning-progress based
curriculum and test it on a complex reinforcement learning problem (called
"Simon Says") where an agent is instructed to obtain a desired goal item. Many
of the required skills depend on each other. Experiments demonstrate that: (1)
a within-episode exploration bonus for obtaining new items improves
performance, (2) dynamically adjusting this bonus across training such that it
only applies to items the agent cannot reliably obtain yet further increases
performance, (3) the learning-progress based curriculum elegantly follows the
learning curve of the agent, and (4) when the learning-progress based
curriculum is combined with the dynamic exploration bonus it learns much more
efficiently and obtains far higher performance than uniform baselines. These
results suggest that combining intra-episode and across-training exploration
bonuses with learning progress creates a promising method for automated
curriculum generation, which may substantially increase our ability to train
more capable, generally intelligent agents.
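To make the two mechanisms in the abstract concrete, here is a minimal sketch of (a) a learning-progress-based task sampler and (b) a within-episode item bonus that is switched off once an item becomes reliable. This is not the authors' implementation: the fast/slow moving-average estimate of "change in success probability", the uniform mixing floor, and the 0.9 reliability threshold are illustrative assumptions, and the class/function names are hypothetical.

```python
import random
from collections import defaultdict

class LearningProgressCurriculum:
    """Sketch of a learning-progress-based task sampler.

    Each task keeps a fast and a slow exponential moving average of its
    success rate; their absolute difference approximates the change in
    success probability, i.e. learning progress. Tasks are sampled in
    proportion to that progress, mixed with a small uniform floor so no
    task is starved. Constants are illustrative, not from the paper.
    """

    def __init__(self, tasks, fast=0.1, slow=0.01, floor=0.05):
        self.tasks = list(tasks)
        self.fast = fast      # step size of the fast EMA
        self.slow = slow      # step size of the slow EMA
        self.floor = floor    # uniform mixing weight
        self.p_fast = defaultdict(float)
        self.p_slow = defaultdict(float)

    def update(self, task, success):
        """Record the outcome (0 or 1) of one episode of `task`."""
        s = float(success)
        self.p_fast[task] += self.fast * (s - self.p_fast[task])
        self.p_slow[task] += self.slow * (s - self.p_slow[task])

    def learning_progress(self, task):
        # Absolute change in success probability: fast vs. slow estimate.
        return abs(self.p_fast[task] - self.p_slow[task])

    def sample_task(self):
        lp = [self.learning_progress(t) for t in self.tasks]
        total = sum(lp)
        n = len(self.tasks)
        if total == 0:
            weights = [1.0 / n] * n
        else:
            weights = [(1 - self.floor) * x / total + self.floor / n for x in lp]
        return random.choices(self.tasks, weights=weights, k=1)[0]


def dynamic_item_bonus(item, episode_items, success_rate,
                       threshold=0.9, bonus=1.0):
    """Within-episode exploration bonus, removed once an item is reliable.

    Gives `bonus` reward the first time `item` is obtained in the current
    episode, but only while the agent's measured success rate for that
    item is still below `threshold` (both values are assumptions).
    """
    if item in episode_items:
        return 0.0                      # already collected this episode
    episode_items.add(item)
    if success_rate.get(item, 0.0) >= threshold:
        return 0.0                      # agent already reliable: no bonus
    return bonus
```

In use, the agent would call `sample_task()` at the start of each episode and `update()` with the episode outcome at the end, while `dynamic_item_bonus()` would be added to the environment reward each time a new item appears in the inventory.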
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and second combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- Teacher-student curriculum learning for reinforcement learning [1.7259824817932292]
Reinforcement learning (RL) is a popular paradigm for sequential decision making problems.
The sample inefficiency of deep reinforcement learning methods is a significant obstacle when applying RL to real-world problems.
We propose a teacher-student curriculum learning setting where we simultaneously train a teacher that selects tasks for the student while the student learns how to solve the selected task.
arXiv Detail & Related papers (2022-10-31T14:45:39Z)
- Learning from Guided Play: A Scheduled Hierarchical Approach for Improving Exploration in Adversarial Imitation Learning [7.51557557629519]
We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple auxiliary tasks in addition to a main task.
This affords many benefits: learning efficiency is improved for main tasks with challenging bottleneck transitions, expert data becomes reusable between tasks, and transfer learning through the reuse of learned auxiliary task models becomes possible.
arXiv Detail & Related papers (2021-12-16T14:58:08Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Latent Skill Planning for Exploration and Transfer [49.25525932162891]
In this paper, we investigate how these two approaches can be integrated into a single reinforcement learning agent.
We leverage the idea of partial amortization for fast adaptation at test time.
We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks.
arXiv Detail & Related papers (2020-11-27T18:40:03Z)
- Curriculum Learning with Hindsight Experience Replay for Sequential Object Manipulation Tasks [1.370633147306388]
We present an algorithm that combines curriculum learning with Hindsight Experience Replay (HER) to learn sequential object manipulation tasks.
The algorithm exploits the recurrent structure inherent in many object manipulation tasks and implements the entire learning process in the original simulation without adjusting it to each source task.
arXiv Detail & Related papers (2020-08-21T08:59:28Z)
- Bridging the Imitation Gap by Adaptive Insubordination [88.35564081175642]
We show that when the teaching agent makes decisions with access to privileged information, this information is marginalized during imitation learning.
We propose 'Adaptive Insubordination' (ADVISOR) to address this gap.
ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration.
arXiv Detail & Related papers (2020-07-23T17:59:57Z)
- ELSIM: End-to-end learning of reusable skills through intrinsic motivation [0.0]
We present a novel reinforcement learning architecture which hierarchically learns and represents self-generated skills in an end-to-end way.
With this architecture, an agent focuses only on task-rewarded skills while keeping the learning process of skills bottom-up.
arXiv Detail & Related papers (2020-06-23T11:20:46Z)
- Planning to Explore via Self-Supervised World Models [120.31359262226758]
Plan2Explore is a self-supervised reinforcement learning agent.
We present a new approach to self-supervised exploration and fast adaptation to new tasks.
Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
arXiv Detail & Related papers (2020-05-12T17:59:45Z)
- Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning [81.12201426668894]
We develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks.
We show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible.
We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
arXiv Detail & Related papers (2020-04-27T17:38:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.