Environment Generation for Zero-Shot Compositional Reinforcement
Learning
- URL: http://arxiv.org/abs/2201.08896v1
- Date: Fri, 21 Jan 2022 21:35:01 GMT
- Title: Environment Generation for Zero-Shot Compositional Reinforcement
Learning
- Authors: Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj
Tiwari, Honglak Lee, Aleksandra Faust
- Abstract summary: Compositional Design of Environments (CoDE) trains a Generator agent to automatically build a series of compositional tasks tailored to the agent's current skill level.
We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.
CoDE yields a 4x higher success rate than the strongest baseline, and demonstrates strong performance on real websites learned from 3500 primitive tasks.
- Score: 105.35258025210862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world problems are compositional - solving them requires completing
interdependent sub-tasks, either in series or in parallel, that can be
represented as a dependency graph. Deep reinforcement learning (RL) agents
often struggle to learn such complex tasks due to the long time horizons and
sparse rewards. To address this problem, we present Compositional Design of
Environments (CoDE), which trains a Generator agent to automatically build a
series of compositional tasks tailored to the RL agent's current skill level.
This automatic curriculum not only enables the agent to learn more complex
tasks than it could have otherwise, but also selects tasks where the agent's
performance is weak, enhancing its robustness and ability to generalize
zero-shot to unseen tasks at test-time. We analyze why current environment
generation techniques are insufficient for the problem of generating
compositional tasks, and propose a new algorithm that addresses these issues.
Our results assess learning and generalization across multiple compositional
tasks, including the real-world problem of learning to navigate and interact
with web pages. We learn to generate environments composed of multiple pages or
rooms, and train RL agents capable of completing a wide range of complex tasks in
those environments. We contribute two new benchmark frameworks for generating
compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE
yields a 4x higher success rate than the strongest baseline, and demonstrates strong
performance on real websites learned from 3500 primitive tasks.
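For intuition, below is a minimal, self-contained sketch of a generator-learner curriculum loop in the spirit of CoDE. The toy environment model, the bandit-style generator, and the regret estimate (best minus average learner return over a small population, in the style of PAIRED) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of a CoDE-style generator-learner curriculum loop.
# All class names, the environment model, and the regret estimate are
# illustrative assumptions, not the paper's implementation.
import random

class ToyCompositionalEnv:
    """Stand-in for a generated environment: a chain of `num_pages` primitive sub-tasks."""
    def __init__(self, num_pages):
        self.num_pages = num_pages

    def rollout(self, skill):
        # A learner completes each page independently with probability `skill`;
        # the episode succeeds only if every page is completed (compositional task).
        return float(all(random.random() < skill for _ in range(self.num_pages)))

class ToyGenerator:
    """Bandit-style generator: chooses how many pages to compose, preferring
    difficulty levels where the learner population shows the largest regret."""
    def __init__(self, max_pages=6):
        self.values = {n: 0.0 for n in range(1, max_pages + 1)}

    def sample(self):
        # Epsilon-greedy over difficulty levels.
        if random.random() < 0.2:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, num_pages, regret, lr=0.1):
        # Move the value estimate for this difficulty toward the observed regret.
        self.values[num_pages] += lr * (regret - self.values[num_pages])

def train(steps=2000):
    generator = ToyGenerator()
    skills = [0.5, 0.5, 0.5]  # a small population of learners (toy skill levels)
    for _ in range(steps):
        n = generator.sample()
        env = ToyCompositionalEnv(n)
        returns = [env.rollout(s) for s in skills]
        # Regret estimate: best learner's return minus the population average.
        regret = max(returns) - sum(returns) / len(returns)
        generator.update(n, regret)
        # Learners improve slightly on tasks they solve (toy stand-in for RL updates).
        skills = [min(0.99, s + 0.01 * r) for s, r in zip(skills, returns)]
    return generator.values

if __name__ == "__main__":
    print(train())
```

The sketch only illustrates the feedback structure: the generator proposes compositional environments, a population of learners attempts them, and the generator is rewarded for tasks that sit at the frontier of the learners' current ability, which is what produces the automatic curriculum described in the abstract.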