Blocks Assemble! Learning to Assemble with Large-Scale Structured
Reinforcement Learning
- URL: http://arxiv.org/abs/2203.13733v1
- Date: Tue, 15 Mar 2022 18:21:02 GMT
- Title: Blocks Assemble! Learning to Assemble with Large-Scale Structured
Reinforcement Learning
- Authors: Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Byron David, Shixiang
(Shane) Gu, Satoshi Kataoka, Igor Mordatch
- Abstract summary: Assembly of multi-part physical structures is a valuable end product for autonomous robotics.
We introduce a naturalistic physics-based environment with a set of connectable magnet blocks inspired by children's toy kits.
We find that the combination of large-scale reinforcement learning and graph-based policies is an effective recipe for training agents.
- Score: 23.85678777628229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assembly of multi-part physical structures is both a valuable end product for
autonomous robotics, as well as a valuable diagnostic task for open-ended
training of embodied intelligent agents. We introduce a naturalistic
physics-based environment with a set of connectable magnet blocks inspired by
children's toy kits. The objective is to assemble blocks into a succession of
target blueprints. Despite the simplicity of this objective, the compositional
nature of building diverse blueprints from a set of blocks leads to an
explosion of complexity in structures that agents encounter. Furthermore,
assembly stresses agents' multi-step planning, physical reasoning, and bimanual
coordination. We find that the combination of large-scale reinforcement
learning and graph-based policies -- surprisingly without any additional
complexity -- is an effective recipe for training agents that not only
generalize to complex unseen blueprints in a zero-shot manner, but even operate
in a reset-free setting without being trained to do so. Through extensive
experiments, we highlight the importance of large-scale training, structured
representations, contributions of multi-task vs. single-task learning, as well
as the effects of curriculums, and discuss qualitative behaviors of trained
agents.
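To make the "graph-based policies" recipe concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): each block becomes a graph node, a few rounds of message passing produce per-block embeddings, and a softmax over blocks gives a categorical policy over which block to act on next. The module names, feature sizes, and dense adjacency are illustrative assumptions.

```python
# Hypothetical sketch of a graph-based assembly policy. Each block (and, in
# practice, each blueprint slot) would be a node; message passing mixes
# information between nodes; the head scores "which block to act on next".
import torch
import torch.nn as nn

class GraphAssemblyPolicy(nn.Module):
    def __init__(self, node_dim: int = 16, hidden: int = 64, rounds: int = 3):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        # One message function shared across all (sender, receiver) pairs.
        self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.update = nn.GRUCell(hidden, hidden)
        self.pick_head = nn.Linear(hidden, 1)
        self.rounds = rounds

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (N, node_dim) per-block features (pose, magnet state, ...)
        # adj:   (N, N) 0/1 adjacency, e.g. "within connection range"
        h = torch.relu(self.encode(nodes))
        for _ in range(self.rounds):
            # Build all sender/receiver pairs, mask by adjacency, aggregate.
            pairs = torch.cat(
                [h.unsqueeze(1).expand(-1, h.size(0), -1),
                 h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1)
            msgs = (self.message(pairs) * adj.unsqueeze(-1)).sum(dim=1)
            h = self.update(msgs, h)
        # Categorical distribution over blocks.
        return torch.softmax(self.pick_head(h).squeeze(-1), dim=0)

# Usage: 5 blocks with random features on a fully connected graph.
probs = GraphAssemblyPolicy()(torch.randn(5, 16), torch.ones(5, 5))
print(probs)  # 5 probabilities summing to 1
```

Because the same message and scoring functions are shared across all nodes, the number of blocks is not baked into the network, which is one plausible reason such structured policies can generalize zero-shot to larger, unseen blueprints.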
Related papers
- AssemblyComplete: 3D Combinatorial Construction with Deep Reinforcement Learning [4.3507834596906125]
A critical goal in robotics is to teach robots to adapt to real-world collaborative tasks, particularly in automatic assembly.
This paper introduces 3D assembly completion, which is demonstrated using unit primitives (i.e., Lego bricks).
We propose a two-part deep reinforcement learning (DRL) framework that teaches the robot to understand the objective of an incomplete assembly and to learn a construction policy for completing it.
arXiv Detail & Related papers (2024-10-20T18:51:17Z)
- Reduce, Reuse, Recycle: Categories for Compositional Reinforcement Learning [19.821117942806474]
We view task composition through the prism of category theory.
The categorical properties of Markov decision processes untangle complex tasks into manageable sub-tasks.
Experimental results support the categorical theory of reinforcement learning.
arXiv Detail & Related papers (2024-08-23T21:23:22Z)
- Modular Neural Network Policies for Learning In-Flight Object Catching with a Robot Hand-Arm System [55.94648383147838]
We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of five core modules, including: (i) an object state estimator that learns object trajectory prediction, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, and (iv) a grasping control policy trained to perform soft catching motions.
We conduct extensive evaluations of our framework in simulation, for each module and for the integrated system, demonstrating high success rates of in-flight catching.
arXiv Detail & Related papers (2023-12-21T16:20:12Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem in which each task is characterized by a subtask graph (a toy sketch of this representation appears after this list).
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Policy Architectures for Compositional Generalization in Control [71.61675703776628]
We introduce a framework for modeling entity-based compositional structure in tasks.
Our policies are flexible and can be trained end-to-end without requiring any action primitives.
arXiv Detail & Related papers (2022-03-10T06:44:24Z)
- Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning [52.85981207514049]
We introduce a novel formulation, complex construction, which requires a building agent to assemble unit primitives sequentially.
To construct a target object, we provide the agent with incomplete knowledge about the desired target (i.e., 2D images) instead of exact and explicit information.
We demonstrate that the proposed method successfully learns to construct an unseen object conditioned on a single image or multiple views of a target object.
arXiv Detail & Related papers (2021-10-29T01:09:51Z)
- A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning [104.3643447579578]
We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state.
The design allows agents to learn to plan effectively by attending to the relevant objects, leading to better out-of-distribution generalization.
arXiv Detail & Related papers (2021-06-03T19:35:19Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
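As flagged in the MTSGI entry above, the following is a minimal, hypothetical sketch of the subtask-graph representation: subtasks are nodes, directed edges encode preconditions, and an agent may only execute a subtask once all of its preconditions are complete. The toy task, subtask names, and helper function are illustrative assumptions, not details from the paper.

```python
# Toy subtask graph: nodes are subtasks, edges are precondition constraints.
from collections import defaultdict

def executable(subtasks, preconditions, done):
    """Subtasks not yet done whose preconditions are all completed."""
    return [s for s in subtasks
            if s not in done and all(p in done for p in preconditions[s])]

# Hypothetical block-assembly task decomposed into subtasks.
subtasks = ["pick_base", "pick_top", "attach_top", "inspect"]
preconditions = defaultdict(list, {
    "attach_top": ["pick_base", "pick_top"],
    "inspect": ["attach_top"],
})

done = set()
while len(done) < len(subtasks):
    ready = executable(subtasks, preconditions, done)
    print("executing:", ready)
    done.update(ready)  # assume each ready subtask succeeds
```

Inferring such a graph from limited experience, rather than hand-specifying it, is roughly the few-shot setting the MTSGI summary above describes.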