Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation
- URL: http://arxiv.org/abs/2011.04627v2
- Date: Fri, 13 Nov 2020 20:27:39 GMT
- Title: Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation
- Authors: Mohit Sharma, Jacky Liang, Jialiang Zhao, Alex LaGrassa, Oliver Kroemer
- Abstract summary: We propose using reinforcement learning to compose hierarchical object-centric controllers for manipulation tasks.
Experiments in both simulation and the real world show that the proposed approach leads to improved sample efficiency, zero-shot generalization, and simulation-to-reality transfer without fine-tuning.
- Score: 26.24940293693809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulation tasks can often be decomposed into multiple subtasks performed in parallel, e.g., sliding an object to a goal pose while maintaining contact with a table. Individual subtasks can be achieved by task-axis controllers defined relative to the objects being manipulated, and a set of object-centric controllers can be combined in a hierarchy. In prior work, such combinations are defined manually or learned from demonstrations. By contrast, we propose using reinforcement learning to dynamically compose hierarchical object-centric controllers for manipulation tasks. Experiments in both simulation and the real world show that the proposed approach leads to improved sample efficiency, zero-shot generalization to novel test environments, and simulation-to-reality transfer without fine-tuning.
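To make the composition concrete, here is a minimal Python sketch of combining two object-centric task-axis controllers by priority via nullspace projection. The controller forms, gains, and projection scheme are illustrative assumptions rather than the paper's implementation; in the paper's setting, an RL policy additionally chooses which controllers to activate and their priority ordering at each timestep.

```python
import numpy as np

# Hypothetical task-axis controllers: each proposes a Cartesian velocity
# along an axis defined relative to an object in the scene.
def position_controller(axis, target, pos, gain=1.0):
    """Drive the end-effector toward `target` along the unit vector `axis`."""
    error = np.dot(target - pos, axis)
    return gain * error * axis

def force_controller(axis, desired_force, measured_force, gain=0.01):
    """Regulate contact force along `axis` (e.g., the table normal)."""
    error = desired_force - np.dot(measured_force, axis)
    return gain * error * axis

def compose_hierarchy(commands_and_axes):
    """Combine controller outputs by priority: each lower-priority command
    is projected into the nullspace of all higher-priority task axes."""
    total = np.zeros(3)
    null_proj = np.eye(3)
    for cmd, axis in commands_and_axes:  # ordered high -> low priority
        total += null_proj @ cmd
        a = axis.reshape(3, 1)
        null_proj = null_proj @ (np.eye(3) - a @ a.T)  # remove this axis
    return total

# Example: slide an object to a goal pose while maintaining table contact.
table_normal = np.array([0.0, 0.0, 1.0])
slide_dir = np.array([1.0, 0.0, 0.0])
ee_pos = np.array([0.2, 0.0, 0.05])
goal = np.array([0.5, 0.0, 0.05])
f_measured = np.array([0.0, 0.0, -3.0])  # current contact force (N)

velocity_cmd = compose_hierarchy([
    (force_controller(table_normal, -5.0, f_measured), table_normal),  # priority 1
    (position_controller(slide_dir, goal, ee_pos), slide_dir),         # priority 2
])
print(velocity_cmd)  # Cartesian velocity for a low-level impedance controller
```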
Related papers
- Learning Reusable Manipulation Strategies [86.07442931141634]
Humans demonstrate an impressive ability to acquire and generalize manipulation "tricks".
We present a framework that enables machines to acquire such manipulation skills through a single demonstration and self-play.
These learned mechanisms and samplers can be seamlessly integrated into standard task and motion planners.
arXiv Detail & Related papers (2023-11-06T17:35:42Z)
- Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs [53.66070434419739]
Generalizable articulated object manipulation is essential for home-assistant robots.
We propose a kinematic-aware prompting framework that prompts Large Language Models with kinematic knowledge of objects to generate low-level motion waypoints.
Our framework outperforms traditional methods on 8 seen object categories and shows a powerful zero-shot capability on 8 unseen articulated object categories.
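As a rough illustration of the prompting idea (not the paper's actual template), the sketch below serializes an object's kinematic structure into a text prompt asking an LLM for motion waypoints; the `Joint` dataclass and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    joint_type: str  # "revolute" or "prismatic"
    axis: tuple      # joint axis in the object frame
    origin: tuple    # joint position in the object frame

def build_prompt(task, joints):
    """Serialize the object's kinematic structure into an LLM prompt
    that asks for low-level end-effector motion waypoints."""
    lines = [f"Task: {task}", "Object kinematics:"]
    for j in joints:
        lines.append(f"- {j.name}: {j.joint_type} joint, "
                     f"axis={j.axis}, origin={j.origin}")
    lines.append("Output a list of 3D end-effector waypoints (x, y, z) "
                 "that respect the kinematic constraints above.")
    return "\n".join(lines)

prompt = build_prompt(
    "open the microwave door",
    [Joint("door_hinge", "revolute", (0, 0, 1), (0.3, -0.2, 0.1))],
)
print(prompt)  # send to any chat-completion API; parse waypoints from the reply
```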
arXiv Detail & Related papers (2023-11-06T03:26:41Z)
- Learning Extrinsic Dexterity with Parameterized Manipulation Primitives [8.7221770019454]
We learn a sequence of actions that utilize the environment to change the object's pose.
Our approach can control the object's state by exploiting interactions between the object, the gripper, and the environment.
We evaluate our approach on picking box-shaped objects of various weights, shapes, and friction properties from a constrained table-top workspace.
arXiv Detail & Related papers (2023-10-26T21:28:23Z)
- Composite Motion Learning with Task Control [0.6882042556551609]
We present a deep learning method for composite and task-driven motion control for physically simulated characters.
We learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly, using multiple discriminators in a GAN-like setup.
We show the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
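One plausible reading of the multi-discriminator setup, sketched below under assumptions: a separate discriminator scores the realism of each body part's motion, so different parts can imitate different reference motions. The part layout and network sizes here are invented for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class PartDiscriminator(nn.Module):
    """Scores the realism of one body part's motion features."""
    def __init__(self, part_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(part_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, part_motion):
        return self.net(part_motion)  # realism score for this part

# Indices of each part's joints within the full pose vector (hypothetical).
parts = {"upper_body": slice(0, 30), "lower_body": slice(30, 60)}
discs = {name: PartDiscriminator(s.stop - s.start) for name, s in parts.items()}

pose = torch.randn(1, 60)  # simulated character pose features
scores = {name: discs[name](pose[:, s]) for name, s in parts.items()}
```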
arXiv Detail & Related papers (2023-05-05T05:02:41Z)
- Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement [75.9289887536165]
We present a hierarchical abstraction approach to uncover underlying entities.
We show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment.
We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects.
arXiv Detail & Related papers (2023-03-20T18:19:36Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
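A minimal sketch of a shared (central) latent action space, assuming a central policy that emits one latent action and per-robot decoders that map it to low-level commands; the architecture and dimensions are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Central policy: maps the joint observation to one latent action."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, obs):
        return self.net(obs)

class RobotDecoder(nn.Module):
    """Per-robot decoder: turns the shared latent action into that robot's
    low-level action, conditioned on the robot's own observation."""
    def __init__(self, latent_dim, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim)
        )

    def forward(self, z, obs):
        return self.net(torch.cat([z, obs], dim=-1))

policy = LatentPolicy(obs_dim=20, latent_dim=8)
decoders = [RobotDecoder(8, 10, 7) for _ in range(2)]  # two 7-DoF arms
joint_obs = torch.randn(1, 20)
z = policy(joint_obs)                                   # one shared latent action
actions = [dec(z, torch.randn(1, 10)) for dec in decoders]
```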
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior both in trajectory-based planning and in the execution of individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Learning Sensorimotor Primitives of Sequential Manipulation Tasks from Visual Demonstrations [13.864448233719598]
This paper describes a new neural-network-based framework for simultaneously learning low-level and high-level policies.
A key feature of the proposed approach is that the policies are learned directly from raw videos of task demonstrations.
Empirical results on object manipulation tasks with a robotic arm show that the proposed network can efficiently learn from real visual demonstrations to perform the tasks.
arXiv Detail & Related papers (2022-03-08T01:36:48Z)
- Generalizing Object-Centric Task-Axes Controllers using Keypoints [15.427056235112152]
We learn modular task policies which compose object-centric task-axes controllers.
These task-axes controllers are parameterized by properties associated with underlying objects in the scene.
Our overall approach provides a simple, modular and yet powerful framework for learning manipulation tasks.
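As one way to picture the keypoint parameterization, the sketch below derives an object-centric task-axes frame from two scene keypoints; the construction is an illustrative guess, not the paper's method.

```python
import numpy as np

def axes_from_keypoints(kp_a, kp_b, up=np.array([0.0, 0.0, 1.0])):
    """Build an object-centric frame from two keypoints: the primary task
    axis points from kp_a to kp_b, and Gram-Schmidt against the world
    up-vector completes an orthonormal frame (assumes the keypoint
    direction is not parallel to `up`)."""
    x = kp_b - kp_a
    x = x / np.linalg.norm(x)
    z = up - np.dot(up, x) * x
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows: primary, lateral, normal task axes

# e.g., keypoints detected on a drawer handle define the pulling axis
axes = axes_from_keypoints(np.array([0.10, 0.00, 0.05]),
                           np.array([0.40, 0.10, 0.05]))
```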
arXiv Detail & Related papers (2021-03-18T21:08:00Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme that avoids common pitfalls of goal conditioning by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.