Learning Reusable Manipulation Strategies
- URL: http://arxiv.org/abs/2311.03293v1
- Date: Mon, 6 Nov 2023 17:35:42 GMT
- Title: Learning Reusable Manipulation Strategies
- Authors: Jiayuan Mao, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
- Abstract summary: Humans demonstrate an impressive ability to acquire and generalize manipulation "tricks."
We present a framework that enables machines to acquire such manipulation skills through a single demonstration and self-play.
These learned mechanisms and samplers can be seamlessly integrated into standard task and motion planners.
- Score: 86.07442931141634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans demonstrate an impressive ability to acquire and generalize
manipulation "tricks." Even from a single demonstration, such as using soup
ladles to reach for distant objects, we can apply this skill to new scenarios
involving different object positions, sizes, and categories (e.g., forks and
hammers). Additionally, we can flexibly combine various skills to devise
long-term plans. In this paper, we present a framework that enables machines to
acquire such manipulation skills, referred to as "mechanisms," through a single
demonstration and self-play. Our key insight lies in interpreting each
demonstration as a sequence of changes in robot-object and object-object
contact modes, which provides a scaffold for learning detailed samplers for
continuous parameters. These learned mechanisms and samplers can be seamlessly
integrated into standard task and motion planners, enabling their compositional
use.
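The abstract's key idea, interpreting a demonstration as a sequence of robot-object and object-object contact-mode changes, can be sketched minimally. This is an illustrative assumption of one possible representation, not the paper's actual implementation; all names here (`ContactMode`, `MechanismSchema`, `segment_demo`, and the ladle example) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hedged sketch: a "mechanism" as the symbolic scaffold of a demonstration,
# expressed as its sequence of distinct contact modes. All names are
# illustrative, not the paper's API.

@dataclass(frozen=True)
class ContactMode:
    """The set of active contacts between named bodies in one demo segment."""
    contacts: Tuple[Tuple[str, str], ...]  # e.g. (("gripper", "ladle"),)

@dataclass
class MechanismSchema:
    """A learned mechanism: a named sequence of contact-mode changes that
    scaffolds continuous-parameter samplers for a task and motion planner."""
    name: str
    mode_sequence: List[ContactMode]

def segment_demo(trace: List[Tuple[Tuple[str, str], ...]]) -> List[ContactMode]:
    """Collapse a per-timestep contact trace into its distinct contact modes."""
    modes: List[ContactMode] = []
    for frame in trace:
        mode = ContactMode(contacts=tuple(sorted(frame)))
        if not modes or modes[-1] != mode:
            modes.append(mode)
    return modes

# Hypothetical example: using a ladle to hook a distant cup.
trace = [
    (),                                        # free motion
    (("gripper", "ladle"),),                   # grasp the tool
    (("gripper", "ladle"),),                   # same mode, collapsed
    (("gripper", "ladle"), ("ladle", "cup")),  # tool contacts the target
    (("gripper", "ladle"),),                   # target released after the pull
]
schema = MechanismSchema(name="hook-pull", mode_sequence=segment_demo(trace))
print(len(schema.mode_sequence))  # 4 distinct contact modes
```

In such a representation, each transition between consecutive modes would be the point where a learned sampler proposes continuous parameters (grasp pose, contact point), which is what would let a task and motion planner compose mechanisms.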
Related papers
- Canonical mapping as a general-purpose object descriptor for robotic manipulation [0.0]
We propose using canonical mapping as a near-universal and flexible object descriptor.
We demonstrate that common object representations can be derived from a single pre-trained canonical mapping model.
We perform a multi-stage experiment using two robot arms that demonstrate the robustness of the perception approach.
arXiv Detail & Related papers (2023-03-02T15:09:25Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- ManiSkill: Learning-from-Demonstrations Benchmark for Generalizable Manipulation Skills [27.214053107733186]
We propose SAPIEN Manipulation Skill Benchmark (abbreviated as ManiSkill) for learning generalizable object manipulation skills.
ManiSkill supports object-level variations by utilizing a rich and diverse set of articulated objects.
ManiSkill encourages the robot learning community to further explore learning generalizable object manipulation skills.
arXiv Detail & Related papers (2021-07-30T08:20:22Z)
- MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z)
- SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to only demonstrate full task executions instead of all individual skills.
We propose a novel approach that simultaneously learns to segment trajectories into reoccurring patterns.
The approach learns a skill conditioning that can be used to understand possible sequences of skills.
arXiv Detail & Related papers (2021-03-26T17:27:13Z)
- Self-supervised Visual Reinforcement Learning with Object-centric Representations [11.786249372283562]
We propose to use object-centric representations as a modular and structured observation space.
We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills.
arXiv Detail & Related papers (2020-11-29T14:55:09Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.