DefGoalNet: Contextual Goal Learning from Demonstrations for Deformable Object Manipulation
- URL: http://arxiv.org/abs/2309.14463v1
- Date: Mon, 25 Sep 2023 18:54:32 GMT
- Title: DefGoalNet: Contextual Goal Learning from Demonstrations for Deformable Object Manipulation
- Authors: Bao Thach, Tanner Watts, Shing-Hei Ho, Tucker Hermans, Alan Kuntz
- Abstract summary: We develop DefGoalNet, a novel neural network that learns deformable object goal shapes from human demonstrations.
We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot.
- Score: 11.484820908345563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape servoing, a robotic task dedicated to controlling objects to desired
goal shapes, is a promising approach to deformable object manipulation. An
issue arises, however, with the reliance on the specification of a goal shape.
This goal has been obtained either by a laborious domain knowledge engineering
process or by manually manipulating the object into the desired shape and
capturing the goal shape at that specific moment, both of which are impractical
in various robotic applications. In this paper, we solve this problem by
developing DefGoalNet, a novel neural network that learns deformable object
goal shapes directly from a small number of human demonstrations. We
demonstrate our method's effectiveness on various robotic tasks, both in
simulation and on a physical robot. Notably, in the surgical retraction task,
even when trained with as few as 10 demonstrations, our method achieves a
median success percentage of nearly 90%. These results mark a substantial
advancement in enabling shape servoing methods to bring deformable object
manipulation closer to practical, real-world applications.
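To make the approach concrete, below is a minimal, hypothetical sketch of the contextual goal-learning idea: a PointNet-style encoder pools per-point features of the observed scene into a global context vector, and an MLP decoder regresses a goal point cloud from it. The layer sizes, the single-cloud input, and the loss choice are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of contextual goal prediction (not the paper's exact design).
import torch
import torch.nn as nn

class GoalPredictor(nn.Module):
    def __init__(self, n_goal_points: int = 512, latent_dim: int = 256):
        super().__init__()
        self.n_goal_points = n_goal_points
        # Shared per-point MLP (PointNet-style), implemented as 1x1 convolutions.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Decode the pooled context feature into a fixed-size goal point cloud.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_goal_points * 3),
        )

    def forward(self, context_points: torch.Tensor) -> torch.Tensor:
        # context_points: (B, N, 3) partial-view point cloud of the scene.
        feats = self.encoder(context_points.transpose(1, 2))  # (B, latent, N)
        global_feat = feats.max(dim=2).values                 # order-invariant pooling
        goal = self.decoder(global_feat)                      # (B, n_goal_points * 3)
        return goal.view(-1, self.n_goal_points, 3)

# Train on (context cloud, demonstrated goal cloud) pairs from a handful of
# demonstrations, e.g. with a Chamfer or MSE loss; at test time, hand the
# predicted goal to a shape-servo controller such as DeformerNet.
model = GoalPredictor()
predicted_goal = model(torch.randn(4, 1024, 3))  # -> (4, 512, 3)
```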
Related papers
- ManiFoundation Model for General-Purpose Robotic Manipulation of Contact Synthesis with Arbitrary Objects and Robots [24.035706461949715]
There is a pressing need to develop a model that enables general-purpose robots to undertake a broad spectrum of manipulation tasks.
Our work introduces a comprehensive framework to develop a foundation model for general robotic manipulation.
Our model achieves average success rates of around 90%.
arXiv Detail & Related papers (2024-05-11T09:18:37Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- One-shot Imitation Learning via Interaction Warping [32.5466340846254]
We propose a new method, Interaction Warping, for learning SE(3) robotic manipulation policies from a single demonstration.
We infer the 3D mesh of each object in the environment using shape warping, a technique for aligning point clouds across object instances.
We show successful one-shot imitation learning on three simulated and real-world object re-arrangement tasks.
arXiv Detail & Related papers (2023-06-21T17:26:11Z)
- DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects [13.138509669247508]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom that determine the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape to learn a low-dimensional shape embedding.
This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action to deform the object toward the goal shape.
arXiv Detail & Related papers (2023-05-08T04:08:06Z)
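As a rough illustration of the servo loop described in the DeformerNet entry above, the sketch below embeds the current and goal point clouds into low-dimensional features and regresses an end-effector motion from the pair of embeddings. The encoder sizes and the translation-only 3-DoF output are assumptions; the actual method is richer (e.g. bimanual actions, full poses).

```python
# Hypothetical DeformerNet-style shape servoing sketch (illustrative only).
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """PointNet-style encoder: (B, N, 3) point cloud -> (B, latent_dim) feature."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, latent_dim, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.net(pts.transpose(1, 2)).max(dim=2).values

class ShapeServoPolicy(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = PointEncoder(latent_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),  # end-effector position delta (dx, dy, dz)
        )

    def forward(self, current_pts: torch.Tensor, goal_pts: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder(current_pts), self.encoder(goal_pts)], dim=1)
        return self.head(z)

# Closed-loop use: re-observe the object, recompute the action, and repeat
# until the current and goal shape embeddings are sufficiently close.
policy = ShapeServoPolicy()
action = policy(torch.randn(1, 1024, 3), torch.randn(1, 1024, 3))  # -> (1, 3)
```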
- DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics [97.75188532559952]
We propose a principled framework that abstracts dexterous manipulation skills from human demonstration.
We then train a skill model using demonstrations for planning over action abstractions in imagination.
To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks.
arXiv Detail & Related papers (2023-03-27T17:59:49Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high-quality motion capture example.
We propose a simple greedy curriculum search algorithm that can be successfully applied to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
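The reward construction in the entry above can be illustrated with a short sketch: train a frame encoder with a time-contrastive (here, triplet) objective so that temporally nearby frames embed close together, then define the reward as the negative embedding distance to a goal frame. The toy CNN, the 64x64 input size, and the margin value are assumptions, not the paper's exact setup.

```python
# Hypothetical time-contrastive reward sketch (illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(  # toy encoder for (B, 3, 64, 64) video frames
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)

def time_contrastive_loss(anchor, positive, negative, margin: float = 0.2):
    # anchor/positive: frames close in time; negative: a temporally distant frame.
    return F.triplet_margin_loss(embed(anchor), embed(positive), embed(negative),
                                 margin=margin)

def reward(obs_frame, goal_frame):
    # Task-agnostic reward for policy learning: negative distance to the goal
    # image in the learned embedding space.
    with torch.no_grad():
        return -torch.norm(embed(obs_frame) - embed(goal_frame), dim=1)
```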
- Learning Visual Shape Control of Novel 3D Deformable Objects from Partial-View Point Clouds [7.1659268120093635]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom that determine the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the object being manipulated and a point cloud of the goal shape to learn a low-dimensional representation of the object shape.
arXiv Detail & Related papers (2021-10-10T02:34:57Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve strong performance but require a large amount of training data collected on the same robotic platform.
We formulate adaptation across platforms as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
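To ground the last entry, here is a simplified, hypothetical sketch of planning with a self-supervised functional distance: a network is regressed onto the number of steps separating two states drawn from offline trajectories, and a random-shooting planner picks the action whose model-predicted successor minimizes the learned distance to the goal. The flat state vectors, the one-step dynamics model, and the shooting planner are all illustrative assumptions.

```python
# Hypothetical functional-distance planning sketch (illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 16, 4
distance = nn.Sequential(nn.Linear(2 * STATE_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
dynamics = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
                         nn.Linear(128, STATE_DIM))

def distance_loss(s_t, s_tk, k):
    # Self-supervised target: s_t and s_tk are k steps apart in an offline trajectory.
    pred = distance(torch.cat([s_t, s_tk], dim=-1)).squeeze(-1)
    return F.mse_loss(pred, k.float())

def plan(state, goal, n_candidates: int = 64):
    # Random shooting: score candidate actions by predicted distance-to-goal.
    with torch.no_grad():
        actions = torch.randn(n_candidates, ACTION_DIM)
        nxt = dynamics(torch.cat([state.expand(n_candidates, -1), actions], dim=-1))
        d = distance(torch.cat([nxt, goal.expand(n_candidates, -1)], dim=-1)).squeeze(-1)
        return actions[d.argmin()]

action = plan(torch.randn(STATE_DIM), torch.randn(STATE_DIM))  # -> (ACTION_DIM,)
```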