DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects
- URL: http://arxiv.org/abs/2305.04449v3
- Date: Mon, 19 Feb 2024 09:09:46 GMT
- Title: DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects
- Authors: Bao Thach, Brian Y. Cho, Shing-Hei Ho, Tucker Hermans, Alan Kuntz
- Abstract summary: Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape to learn a low-dimensional shape embedding.
This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action to iteratively deform the object toward the target shape.
- Score: 13.138509669247508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications in fields ranging from home care to warehouse fulfillment to
surgical assistance require robots to reliably manipulate the shape of 3D
deformable objects. Analytic models of elastic, 3D deformable objects require
numerous parameters to describe the potentially infinite degrees of freedom
present in determining the object's shape. Previous attempts at performing 3D
shape control rely on hand-crafted features to represent the object shape and
require training of object-specific control models. We overcome these issues
through the use of our novel DeformerNet neural network architecture, which
operates on a partial-view point cloud of the manipulated object and a point
cloud of the goal shape to learn a low-dimensional representation of the object
shape. This shape embedding enables the robot to learn a visual servo
controller that computes the desired robot end-effector action to iteratively
deform the object toward the target shape. We demonstrate both in simulation
and on a physical robot that DeformerNet reliably generalizes to object shapes
and material stiffness not seen during training, including ex vivo chicken
muscle tissue. Crucially, using DeformerNet, the robot successfully
accomplishes three surgical sub-tasks: retraction (moving tissue aside to
access a site underneath it), tissue wrapping (a sub-task in procedures like
aortic stent placements), and connecting two tubular pieces of tissue (a
sub-task in anastomosis).
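The abstract describes an architecture that encodes the current partial-view point cloud and the goal point cloud into a low-dimensional shape embedding, then predicts end-effector actions from that embedding. Below is a minimal, illustrative PyTorch sketch of such a pipeline; it is not the authors' released implementation. The module names (PointCloudEncoder, ShapeServoPolicy), the per-point MLP encoder (a simplification of the convolutional point-cloud features the papers use), and the 12-dimensional bimanual action layout are all assumptions for illustration.

```python
# Illustrative sketch only -- hypothetical names, not the authors' released code.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Simplified PointNet-style encoder: per-point MLP followed by max-pooling.

    The DeformerNet papers use convolutional point-cloud features; this MLP is a
    stand-in to keep the sketch short.
    """

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) partial-view point cloud
        features = self.mlp(points)            # (batch, num_points, embed_dim)
        return features.max(dim=1).values      # (batch, embed_dim) shape embedding


class ShapeServoPolicy(nn.Module):
    """Maps current and goal shape embeddings to a bimanual end-effector action."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim)
        self.action_head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            # Assumed action layout: 2 grippers x (3-D translation + 3-D rotation).
            nn.Linear(128, 12),
        )

    def forward(self, current_pc: torch.Tensor, goal_pc: torch.Tensor) -> torch.Tensor:
        current_embed = self.encoder(current_pc)
        goal_embed = self.encoder(goal_pc)
        return self.action_head(torch.cat([current_embed, goal_embed], dim=-1))


if __name__ == "__main__":
    policy = ShapeServoPolicy()
    current_pc = torch.rand(1, 1024, 3)    # observed partial-view point cloud
    goal_pc = torch.rand(1, 1024, 3)       # desired goal shape
    action = policy(current_pc, goal_pc)   # (1, 12) end-effector displacement command
    print(action.shape)
```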
Related papers
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- Fast Point Cloud to Mesh Reconstruction for Deformable Object Tracking [6.003255659803736]
We develop a method that takes as input a template mesh, i.e., the mesh of an object in its non-deformed state, and a deformed point cloud of the same object.
Our trained model can perform mesh reconstruction and tracking at a rate of 58 Hz on a template mesh of 3000 vertices and a deformed point cloud of 5000 points.
One downstream application is the control algorithm for a robotic hand, which requires online feedback on the state of the manipulated object.
arXiv Detail & Related papers (2023-11-05T19:59:36Z)
- DefGoalNet: Contextual Goal Learning from Demonstrations For Deformable Object Manipulation [11.484820908345563]
We develop a novel neural network, DefGoalNet, to learn deformable object goal shapes from demonstrations.
We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot.
arXiv Detail & Related papers (2023-09-25T18:54:32Z)
- ShapeShift: Superquadric-based Object Pose Estimation for Robotic Grasping [85.38689479346276]
Current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories.
This paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape which is fitted to the object.
arXiv Detail & Related papers (2023-04-10T20:55:41Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Learning Visual Shape Control of Novel 3D Deformable Objects from Partial-View Point Clouds [7.1659268120093635]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the object being manipulated and a point cloud of the goal shape to learn a low-dimensional representation of the object shape.
arXiv Detail & Related papers (2021-10-10T02:34:57Z)
- Object Wake-up: 3-D Object Reconstruction, Animation, and in-situ Rendering from a Single Image [58.69732754597448]
Given a picture of a chair, could we extract the 3-D shape of the chair, animate its plausible articulations and motions, and render it in-situ in its original image space?
We devise an automated approach to extract and manipulate articulated objects in single images.
arXiv Detail & Related papers (2021-08-05T16:20:12Z)
- DeformerNet: A Deep Learning Approach to 3D Deformable Object Manipulation [5.733365759103406]
We propose a novel approach to 3D deformable object manipulation leveraging a deep neural network called DeformerNet.
We explicitly use 3D point clouds as the state representation and apply a convolutional neural network on point clouds to learn 3D features.
Once trained in an end-to-end fashion, DeformerNet directly maps the current point cloud of a deformable object, as well as a target point cloud shape, to the desired displacement of the robot gripper position (a closed-loop sketch of this servoing idea appears after this list).
arXiv Detail & Related papers (2021-07-16T18:20:58Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks [36.90218756798642]
Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation.
We develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures.
We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for learning robotic manipulation.
arXiv Detail & Related papers (2020-12-06T22:21:54Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
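As noted in the DeformerNet entries above, the trained network maps the current and target point clouds to a desired gripper displacement, which the robot applies iteratively as a visual servo controller. The following is a minimal, hypothetical sketch of such a closed loop; capture_point_cloud and execute_displacement are placeholder robot-interface functions, chamfer_distance is a simple convergence check, and `policy` stands in for a model like the ShapeServoPolicy sketched earlier on this page. None of this is from the papers' released code.

```python
# Hypothetical closed-loop shape-servo sketch; capture_point_cloud and
# execute_displacement are placeholders for a real robot interface.
import torch


def capture_point_cloud() -> torch.Tensor:
    """Placeholder: return the current partial-view point cloud from a depth sensor."""
    return torch.rand(1, 1024, 3)


def execute_displacement(action: torch.Tensor) -> None:
    """Placeholder: send the predicted end-effector displacement to the robot."""


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> float:
    """Symmetric Chamfer distance between two point clouds of shape (batch, N, 3)."""
    d = torch.cdist(a, b)  # pairwise distances, (batch, N, M)
    return (d.min(dim=2).values.mean() + d.min(dim=1).values.mean()).item()


def shape_servo(policy, goal_pc: torch.Tensor, tol: float = 1e-3, max_steps: int = 50) -> None:
    """Iteratively deform the object toward the goal shape.

    `policy` is any model mapping (current_pc, goal_pc) -> action, e.g. the
    ShapeServoPolicy sketched earlier.
    """
    for _ in range(max_steps):
        current_pc = capture_point_cloud()
        if chamfer_distance(current_pc, goal_pc) < tol:
            break  # current shape is close enough to the goal
        with torch.no_grad():
            action = policy(current_pc, goal_pc)
        execute_displacement(action)
```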