Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum
- URL: http://arxiv.org/abs/2303.12726v1
- Date: Tue, 14 Mar 2023 17:08:19 GMT
- Title: Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum
- Authors: Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk, Yuting Ye
- Abstract summary: We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that can be successfully applied to a range of objects such as a teapot, bunny, bottle, train, and elephant.
- Score: 79.6027464700869
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In-hand object manipulation is challenging to simulate due to complex contact
dynamics, non-repetitive finger gaits, and the need to indirectly control
unactuated objects. Further adapting a successful manipulation skill to new
objects with different shapes and physical properties is a similarly
challenging problem. In this work, we show that natural and robust in-hand
manipulation of simple objects in a dynamic simulation can be learned from a
high quality motion capture example via deep reinforcement learning with
careful design of the imitation learning problem. We apply our approach to
both single-handed and two-handed dexterous manipulations of diverse object
shapes and motions. We then demonstrate further adaptation of the example
motion to a more complex shape through curriculum learning on intermediate
shapes morphed between the source and target object. While a naive curriculum
of progressive morphs often falls short, we propose a simple greedy curriculum
search algorithm that can be successfully applied to a range of objects such as
a teapot, bunny, bottle, train, and elephant.
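To make the curriculum idea concrete, here is a minimal sketch in Python of a greedy curriculum search over morphed shapes. It is an illustration under stated assumptions, not the authors' implementation: `fine_tune` is a hypothetical callback that fine-tunes the current policy on a given shape and reports success, and the linear vertex blend stands in for whatever morphing scheme the paper actually uses.

```python
import numpy as np

def morph_shape(source_verts, target_verts, t):
    """Blend two meshes with shared topology: t = 0 gives the source
    shape, t = 1 the target. A simple linear blend stands in here for
    the paper's actual morphing scheme (an assumption)."""
    return (1.0 - t) * source_verts + t * target_verts

def greedy_shape_curriculum(policy, source_verts, target_verts, fine_tune,
                            step=0.5, min_step=0.01):
    """Greedily advance a morph parameter t from 0 toward 1, fine-tuning
    the policy on each intermediate shape. fine_tune(policy, verts) is a
    hypothetical callback returning (new_policy, succeeded); when it
    fails, the search backs off to a smaller morph step instead of
    following a fixed schedule."""
    t = 0.0
    while t < 1.0:
        t_next = min(t + step, 1.0)
        candidate = morph_shape(source_verts, target_verts, t_next)
        policy_candidate, succeeded = fine_tune(policy, candidate)
        if succeeded:
            policy, t = policy_candidate, t_next  # commit the greedy step
        else:
            step /= 2.0  # the morph was too hard; try a smaller jump
            if step < min_step:
                raise RuntimeError(f"curriculum stalled at t = {t:.3f}")
    return policy
```

A naive curriculum corresponds to never shrinking `step`; the greedy back-off is what lets the search route around intermediate shapes where a single large morph fails.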
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language captioning and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- DefGoalNet: Contextual Goal Learning from Demonstrations for Deformable Object Manipulation [11.484820908345563]
We develop a novel neural network DefGoalNet to learn deformable object goal shapes.
We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot.
arXiv Detail & Related papers (2023-09-25T18:54:32Z)
- ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation [29.999224233718927]
ArtiGrasp is a method to synthesize bi-manual hand-object interactions that include grasping and articulation.
Our framework unifies grasping and articulation within a single policy guided by a single hand pose reference.
We show that our method can generate motions with noisy hand-object pose estimates from an off-the-shelf image-based regressor.
arXiv Detail & Related papers (2023-09-07T17:53:20Z)
- DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics [97.75188532559952]
We propose a principled framework that abstracts dexterous manipulation skills from human demonstration.
We then train a skill model using demonstrations for planning over action abstractions in imagination.
To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks.
arXiv Detail & Related papers (2023-03-27T17:59:49Z)
- Collaborative Learning for Hand and Object Reconstruction with Attention-guided Graph Convolution [49.10497573378427]
Estimating the pose and shape of hands and objects under interaction finds numerous applications including augmented and virtual reality.
Our algorithm is agnostic to object models, and it learns the physical rules governing hand-object interaction.
Experiments using four widely-used benchmarks show that our framework surpasses state-of-the-art accuracy in 3D pose estimation and recovers dense 3D hand and object shapes.
arXiv Detail & Related papers (2022-04-27T17:00:54Z)
- Learning Generalizable Dexterous Manipulation from Human Grasp Affordance [11.060931225148936]
Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics.
Recent progress in imitation learning has greatly improved sample efficiency compared to reinforcement learning.
We propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category.
arXiv Detail & Related papers (2022-04-05T16:26:22Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation [15.477950393687836]
We present SoftGym, a set of open-source simulated benchmarks for manipulating deformable objects.
We evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms.
arXiv Detail & Related papers (2020-11-14T03:46:59Z)
- Unsupervised Shape and Pose Disentanglement for 3D Meshes [49.431680543840706]
We present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting.
We use a combination of self-consistency and cross-consistency constraints to learn pose and shape space from registered meshes (a sketch of one such constraint follows this list).
We demonstrate the usefulness of learned representations through a number of tasks including pose transfer and shape retrieval.
arXiv Detail & Related papers (2020-07-22T11:00:27Z)
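To illustrate the flavor of the consistency constraints mentioned in the last entry, here is a minimal, generic sketch of a code-swapping consistency loss in PyTorch. The `encoder` and `decoder` arguments are hypothetical stand-ins, and this is one plausible form of such a constraint, not necessarily the paper's exact formulation.

```python
import torch.nn.functional as F

def swap_consistency_loss(encoder, decoder, mesh_a, mesh_b):
    """mesh_a and mesh_b are two registered meshes of the same subject in
    different poses, so they should share a shape code. After swapping the
    shape codes between the two encodings, the decoder should still
    reconstruct each input. Here `encoder` maps an (N, 3) vertex tensor
    to a (shape_code, pose_code) pair and `decoder` inverts it; both are
    hypothetical stand-ins for the paper's networks."""
    shape_a, pose_a = encoder(mesh_a)
    shape_b, pose_b = encoder(mesh_b)
    # Decode each pose with the *other* mesh's shape code; since both
    # meshes come from the same subject, reconstruction should still work.
    recon_a = decoder(shape_b, pose_a)
    recon_b = decoder(shape_a, pose_b)
    return F.mse_loss(recon_a, mesh_a) + F.mse_loss(recon_b, mesh_b)
```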