ManiSkill: Learning-from-Demonstrations Benchmark for Generalizable
Manipulation Skills
- URL: http://arxiv.org/abs/2107.14483v1
- Date: Fri, 30 Jul 2021 08:20:22 GMT
- Title: ManiSkill: Learning-from-Demonstrations Benchmark for Generalizable
Manipulation Skills
- Authors: Tongzhou Mu, Zhan Ling, Fanbo Xiang, Derek Yang, Xuanlin Li, Stone
Tao, Zhiao Huang, Zhiwei Jia, Hao Su
- Abstract summary: We propose SAPIEN Manipulation Skill Benchmark (abbreviated as ManiSkill) for learning generalizable object manipulation skills.
ManiSkill supports object-level variations by utilizing a rich and diverse set of articulated objects.
ManiSkill can encourage the robot learning community to further explore learning generalizable object manipulation skills.
- Score: 27.214053107733186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning generalizable manipulation skills is central for robots to achieve
task automation in environments with endless scene and object variations.
However, existing robot learning environments are limited in both scale and
diversity of 3D assets (especially of articulated objects), making it difficult
to train and evaluate the generalization ability of agents over novel objects.
In this work, we focus on object-level generalization and propose SAPIEN
Manipulation Skill Benchmark (abbreviated as ManiSkill), a large-scale
learning-from-demonstrations benchmark for articulated object manipulation with
visual input (point cloud and image). ManiSkill supports object-level
variations by utilizing a rich and diverse set of articulated objects, and each
task is carefully designed for learning manipulations on a single category of
objects. We equip ManiSkill with high-quality demonstrations to facilitate
learning-from-demonstrations approaches and perform evaluations on common
baseline algorithms. We believe ManiSkill can encourage the robot learning
community to further explore learning generalizable object manipulation skills.
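The benchmark exposes its tasks through a gym-style interface with point-cloud or image observations and ships demonstration trajectories for each task. The sketch below illustrates how an agent might interact with one such task; the environment id "OpenCabinetDoor-v0", the "set_env_mode" call, the "level" argument, and the "mani_skill.env" import path are assumptions made for illustration based on the abstract, not a verified API.

    import gym
    import numpy as np
    import mani_skill.env  # assumed import that registers ManiSkill tasks with gym

    # Create one articulated-object task (task id assumed for illustration).
    env = gym.make("OpenCabinetDoor-v0")
    env.set_env_mode(obs_mode="pointcloud")  # visual input: point cloud (assumed setter)

    obs = env.reset(level=0)  # a "level" fixes the object instance and initial layout
    for _ in range(200):
        # A random policy stands in for a learned or behavior-cloned policy
        # trained on the provided demonstrations.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset(level=np.random.randint(0, 100))
    env.close()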
Related papers
- Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z)
- Learning Reusable Manipulation Strategies [86.07442931141634]
Humans demonstrate an impressive ability to acquire and generalize manipulation "tricks."
We present a framework that enables machines to acquire such manipulation skills through a single demonstration and self-play.
These learned mechanisms and samplers can be seamlessly integrated into standard task and motion planners.
arXiv Detail & Related papers (2023-11-06T17:35:42Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Learning Category-Level Generalizable Object Manipulation Policy via Generative Adversarial Self-Imitation Learning from Demonstrations [14.001076951265558]
Generalizable object manipulation skills are critical for intelligent robots to work in real-world complex scenes.
In this work, we tackle this category-level object manipulation policy learning problem via imitation learning in a task-agnostic manner.
We propose several general but critical techniques, including generative adversarial self-imitation learning from demonstrations, progressive growing of discriminator, and instance-balancing for expert buffer.
arXiv Detail & Related papers (2022-03-04T02:52:02Z)
- Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning [108.08083976908195]
We show that policies learned by existing reinforcement learning algorithms can in fact be generalist.
We show that a single generalist policy can perform in-hand manipulation of over 100 geometrically-diverse real-world objects.
Interestingly, we find that multi-task learning with object point cloud representations not only generalizes better but even outperforms single-object specialist policies.
arXiv Detail & Related papers (2021-11-04T17:59:56Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to demonstrate only full task executions rather than each individual skill.
We propose a novel approach that simultaneously learns to segment trajectories into reoccurring patterns.
The approach learns a skill conditioning that can be used to understand possible sequences of skills.
arXiv Detail & Related papers (2021-03-26T17:27:13Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)