Learning Task Decomposition with Ordered Memory Policy Network
- URL: http://arxiv.org/abs/2103.10972v1
- Date: Fri, 19 Mar 2021 18:13:35 GMT
- Title: Learning Task Decomposition with Ordered Memory Policy Network
- Authors: Yuchen Lu, Yikang Shen, Siyuan Zhou, Aaron Courville, Joshua B. Tenenbaum, Chuang Gan
- Abstract summary: We propose Ordered Memory Policy Network (OMPN) to discover subtask hierarchy by learning from demonstration.
OMPN can be applied to partially observable environments and still achieve higher task decomposition performance.
Our visualization confirms that the subtask hierarchy can emerge in our model.
- Score: 73.3813423684999
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many complex real-world tasks are composed of several levels of sub-tasks.
Humans leverage these hierarchical structures to accelerate the learning
process and achieve better generalization. In this work, we study the inductive
bias and propose Ordered Memory Policy Network (OMPN) to discover subtask
hierarchy by learning from demonstration. The discovered subtask hierarchy
could be used to perform task decomposition, recovering the subtask boundaries
in an unstructured demonstration. Experiments on Craft and Dial demonstrate
that our model can achieve higher task decomposition performance under both
unsupervised and weakly supervised settings, compared with strong baselines.
OMPN can also be directly applied to partially observable environments and still
achieve higher task decomposition performance. Our visualization further
confirms that the subtask hierarchy can emerge in our model.
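As a concrete illustration of what task decomposition produces, here is a minimal sketch with a hypothetical `decompose` helper and synthetic `boundary_probs`; this is not the authors' OMPN implementation, where the boundary signal would instead come from the model's internal memory updates. The idea: a model assigns each timestep a boundary probability, and the top-scoring timesteps are read off as subtask boundaries.

```python
import numpy as np

def decompose(boundary_probs, num_subtasks):
    """Return the num_subtasks - 1 most likely subtask boundary indices."""
    k = num_subtasks - 1
    # Take the k timesteps with the highest boundary probability,
    # then sort them back into temporal order.
    return np.sort(np.argsort(boundary_probs)[-k:])

# Toy 12-step demonstration with two ground-truth boundaries (t=3, t=7).
probs = np.array([.05, .10, .05, .90, .10, .05, .10, .85, .10, .05, .10, .05])
print(decompose(probs, num_subtasks=3))  # -> [3 7]
```

The recovered boundaries split the demonstration into contiguous segments, one per subtask, which is typically how decomposition performance is scored against ground-truth segmentations.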
Related papers
- Identifying Selections for Unsupervised Subtask Discovery [12.22188797558089]
We provide a theory to identify, and experiments to verify, the existence of selection variables in data.
These selections serve as subgoals that indicate subtasks and guide the policy.
In light of this idea, we develop a sequential non-negative matrix factorization (seq-NMF) method to learn these subgoals and extract meaningful behavior patterns as subtasks (see the sketch after this list).
arXiv Detail & Related papers (2024-10-28T23:47:43Z)
- On the benefits of pixel-based hierarchical policies for task generalization [7.207480346660617]
Reinforcement learning practitioners often avoid hierarchical policies, especially in image-based observation spaces.
We analyze the benefits of hierarchy through simulated multi-task robotic control experiments from pixels.
arXiv Detail & Related papers (2024-07-27T01:26:26Z)
- Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis [8.080026425139708]
We show that hierarchically modular neural networks offer benefits such as learning efficiency, generalization, multi-task learning, and transfer.
We propose an approach based on iterative unit and edge pruning (during training), combined with network analysis for module detection and hierarchy inference.
arXiv Detail & Related papers (2023-05-28T15:12:32Z)
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks [55.42850359286304]
We propose Decomposed Prompting to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks.
This modular structure allows each prompt to be optimized for its specific sub-task.
We show that the flexibility and modularity of Decomposed Prompting allow it to outperform prior work on few-shot prompting.
arXiv Detail & Related papers (2022-10-05T17:28:20Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem where each task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks.
Our experimental results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
arXiv Detail & Related papers (2022-05-05T10:46:16Z)
- Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning [36.050432925402845]
We present HiDe, a novel hierarchical reinforcement learning architecture that successfully solves long horizon control tasks.
We experimentally show that our method generalizes across unseen test environments and can scale to 3x the horizon length compared with both learning-based and non-learning-based methods.
arXiv Detail & Related papers (2020-02-14T10:19:52Z)
- Hierarchical Reinforcement Learning as a Model of Human Task Interleaving [60.95424607008241]
We develop a hierarchical model of supervisory control driven by reinforcement learning.
The model reproduces known empirical effects of task interleaving.
The results support hierarchical RL as a plausible model of task interleaving.
arXiv Detail & Related papers (2020-01-04T17:53:28Z)
- Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies [57.27944046925876]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph.
Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference.
Our experimental results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter.
arXiv Detail & Related papers (2020-01-01T17:34:00Z)
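To make the matrix-factorization idea from the seq-NMF entry above concrete, here is a minimal sketch using plain NMF from scikit-learn. This is an illustrative assumption on my part, not the paper's sequential seq-NMF method: a non-negative behavior matrix X (timesteps x features) is factored into per-timestep pattern weights W and a small dictionary of behavior patterns H, whose rows play the role of candidate subtasks.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 16)))   # toy non-negative behavior matrix

# Factor X ~= W @ H with 4 latent patterns.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)               # (100, 4): per-timestep pattern weights
H = model.components_                    # (4, 16): candidate behavior patterns

print(W.shape, H.shape)
```

Peaks in each column of W then indicate when the corresponding pattern (candidate subtask) is active in the trajectory.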