DexDeform: Dexterous Deformable Object Manipulation with Human
Demonstrations and Differentiable Physics
- URL: http://arxiv.org/abs/2304.03223v1
- Date: Mon, 27 Mar 2023 17:59:49 GMT
- Title: DexDeform: Dexterous Deformable Object Manipulation with Human
Demonstrations and Differentiable Physics
- Authors: Sizhe Li, Zhiao Huang, Tao Chen, Tao Du, Hao Su, Joshua B. Tenenbaum,
Chuang Gan
- Abstract summary: We propose a principled framework that abstracts dexterous manipulation skills from human demonstration.
We then train a skill model using demonstrations for planning over action abstractions in imagination.
To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks.
- Score: 97.75188532559952
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this work, we aim to learn dexterous manipulation of deformable objects
using multi-fingered hands. Reinforcement learning approaches for dexterous
rigid object manipulation would struggle in this setting due to the complexity
of physics interaction with deformable objects. At the same time, previous
trajectory optimization approaches with differentiable physics for deformable
manipulation would suffer from local optima caused by the explosion of contact
modes from hand-object interactions. To address these challenges, we propose
DexDeform, a principled framework that abstracts dexterous manipulation skills
from human demonstration and refines the learned skills with differentiable
physics. Concretely, we first collect a small set of human demonstrations using
teleoperation, and then train a skill model on these demonstrations for
planning over action abstractions in imagination. To explore the goal space, we
further apply augmentations to the existing deformable shapes in demonstrations
and use a gradient optimizer to refine the actions planned by the skill model.
Finally, we adopt the refined trajectories as new demonstrations for finetuning
the skill model. To evaluate the effectiveness of our approach, we introduce a
suite of six challenging dexterous deformable object manipulation tasks.
Compared with baselines, DexDeform is able to better explore and generalize
across novel goals unseen in the initial human demonstrations.
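Read as a pipeline, the abstract describes an iterate-and-refine loop: plan with the skill model, refine the plan with differentiable-physics gradients, and feed the refined trajectories back as new demonstrations. The Python sketch below illustrates that loop only at a schematic level; the SkillModel interface (fit/plan/finetune), the simulator call rollout_loss_and_grad, and augment_shape are hypothetical placeholders, not the authors' actual API or code.

```python
import random

def dexdeform_loop(human_demos, skill_model, sim, augment_shape,
                   num_rounds=10, num_grad_steps=50, lr=1e-2):
    """Schematic of the demonstration-refinement loop described in the abstract.

    human_demos   -- teleoperated (actions, final_shape) pairs
    skill_model   -- hypothetical model exposing fit / plan / finetune
    sim           -- hypothetical differentiable simulator exposing
                     rollout_loss_and_grad(actions, goal)
    augment_shape -- hypothetical augmentation of deformable goal shapes
    """
    # 1) Learn an initial skill model from the human demonstrations.
    skill_model.fit(human_demos)

    demos = list(human_demos)
    for _ in range(num_rounds):
        # 2) Explore the goal space by augmenting an existing deformable shape.
        _, base_shape = random.choice(demos)
        goal = augment_shape(base_shape)

        # 3) Plan actions toward the new goal with the skill model ("in imagination").
        actions = skill_model.plan(goal)

        # 4) Refine the planned actions with gradients from differentiable physics.
        for _ in range(num_grad_steps):
            loss, grad = sim.rollout_loss_and_grad(actions, goal)
            actions = actions - lr * grad

        # 5) Keep the refined trajectory as a new demonstration and finetune.
        demos.append((actions, goal))
        skill_model.finetune(demos)

    return skill_model
```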
Related papers
- ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model [9.525806425270428]
We present ReinDiffuse, which combines reinforcement learning with a motion diffusion model to generate physically credible human motions.
Our method adapts the Motion Diffusion Model to output a parameterized distribution of actions, making it compatible with reinforcement learning paradigms.
Our approach outperforms existing state-of-the-art models on two major datasets, HumanML3D and KIT-ML.
arXiv Detail & Related papers (2024-10-09T16:24:11Z)
- DefGoalNet: Contextual Goal Learning from Demonstrations For Deformable Object Manipulation [11.484820908345563]
We develop a novel neural network, DefGoalNet, to learn deformable object goal shapes.
We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot.
arXiv Detail & Related papers (2023-09-25T18:54:32Z)
- SculptBot: Pre-Trained Models for 3D Deformable Object Manipulation [8.517406772939292]
State representation for materials that exhibit plastic behavior, like modeling clay or bread dough, is difficult because they permanently deform under stress and are constantly changing shape.
We propose a system that uses point clouds as the state representation and leverages a pre-trained point cloud reconstruction Transformer to learn a latent dynamics model that predicts material deformations given a grasp action.
arXiv Detail & Related papers (2023-09-15T19:27:44Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that can be successfully applied to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools [96.38972082580294]
DiffSkill is a novel framework that uses a differentiable physics simulator for skill abstraction to solve deformable object manipulation tasks.
In particular, we first obtain short-horizon skills using individual tools from a gradient-based simulator.
We then learn a neural skill abstractor from the demonstration trajectories which takes RGBD images as input.
arXiv Detail & Related papers (2022-03-31T17:59:38Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
The accompanying benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)