Learning Visual Shape Control of Novel 3D Deformable Objects from
Partial-View Point Clouds
- URL: http://arxiv.org/abs/2110.04685v1
- Date: Sun, 10 Oct 2021 02:34:57 GMT
- Title: Learning Visual Shape Control of Novel 3D Deformable Objects from
Partial-View Point Clouds
- Authors: Bao Thach, Brian Y. Cho, Alan Kuntz, Tucker Hermans
- Abstract summary: Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the object being manipulated and a point cloud of the goal shape to learn a low-dimensional representation of the object shape.
- Score: 7.1659268120093635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: If robots could reliably manipulate the shape of 3D deformable objects, they
could find applications in fields ranging from home care to warehouse
fulfillment to surgical assistance. Analytic models of elastic, 3D deformable
objects require numerous parameters to describe the potentially infinite
degrees of freedom present in determining the object's shape. Previous attempts
at performing 3D shape control rely on hand-crafted features to represent the
object shape and require training of object-specific control models. We
overcome these issues through the use of our novel DeformerNet neural network
architecture, which operates on a partial-view point cloud of the object being
manipulated and a point cloud of the goal shape to learn a low-dimensional
representation of the object shape. This shape embedding enables the robot to
learn a visual servo controller that provides Cartesian pose changes to the
robot end-effector, causing the object to deform towards its target shape.
Crucially, we demonstrate both in simulation and on a physical robot
that DeformerNet reliably generalizes to object shapes and material stiffness
not seen during training and outperforms comparison methods for both the
generic shape control and the surgical task of retraction.
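The abstract specifies DeformerNet's interface but not its layers, so the following is a minimal PyTorch sketch of the pattern it describes, not the paper's implementation: a shared point cloud encoder (a PointNet-like stand-in here; the paper's encoder may differ) embeds both the current partial-view cloud and the goal cloud, and a small head maps the paired embeddings to a 6-DoF end-effector pose change. All module names, layer sizes, and the axis-angle action parameterization are assumptions.
```python
# Hypothetical sketch of a DeformerNet-style pipeline, based only on the
# abstract. Layer sizes and the action parameterization are assumed.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Shared-weight encoder: (B, N, 3) point cloud -> (B, D) shape embedding.

    A PointNet-like stand-in: per-point MLP followed by max pooling.
    """
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        feats = self.point_mlp(points)   # (B, N, D) per-point features
        return feats.max(dim=1).values   # (B, D) global shape embedding

class DeformerNetSketch(nn.Module):
    """Maps (current, goal) point clouds to a 6-DoF end-effector pose change
    (3D translation + 3D axis-angle rotation), as the abstract describes."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )

    def forward(self, current_pc, goal_pc):
        z_cur = self.encoder(current_pc)   # embedding of the observed shape
        z_goal = self.encoder(goal_pc)     # embedding of the goal shape
        return self.head(torch.cat([z_cur, z_goal], dim=-1))  # (B, 6) delta

# Usage: one servo step on a batch of partial-view clouds.
model = DeformerNetSketch()
action = model(torch.randn(1, 2048, 3), torch.randn(1, 2048, 3))
print(action.shape)  # torch.Size([1, 6])
```
In a closed visual servo loop, the predicted pose change would be executed, a new partial-view cloud captured, and the model re-queried until the observed shape matches the goal.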
Related papers
- Fast Point Cloud to Mesh Reconstruction for Deformable Object Tracking [6.003255659803736]
We develop a method that takes as input a template mesh, i.e., the mesh of the object in its non-deformed state, and a deformed point cloud of the same object.
Our trained model can perform mesh reconstruction and tracking at 58 Hz on a template mesh of 3,000 vertices and a deformed point cloud of 5,000 points.
One downstream application is a control algorithm for a robotic hand that requires online feedback on the state of the manipulated object.
arXiv Detail & Related papers (2023-11-05T19:59:36Z) - SculptBot: Pre-Trained Models for 3D Deformable Object Manipulation [8.517406772939292]
State representation for materials that exhibit plastic behavior, like modeling clay or bread dough, is difficult because they permanently deform under stress and are constantly changing shape.
We propose a system that uses point clouds as the state representation and leverages a pre-trained point cloud reconstruction Transformer to learn a latent dynamics model that predicts material deformations given a grasp action (a sketch of this latent-dynamics pattern appears after this list).
arXiv Detail & Related papers (2023-09-15T19:27:44Z) - DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects [13.138509669247508]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape.
This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action to deform the object toward the goal shape.
arXiv Detail & Related papers (2023-05-08T04:08:06Z) - ShapeShift: Superquadric-based Object Pose Estimation for Robotic
Grasping [85.38689479346276]
Current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories.
This paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape which is fitted to the object.
arXiv Detail & Related papers (2023-04-10T20:55:41Z) - Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape
Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z) - Object Wake-up: 3-D Object Reconstruction, Animation, and in-situ
Rendering from a Single Image [58.69732754597448]
Given a picture of a chair, could we extract the 3-D shape of the chair, animate its plausible articulations and motions, and render in-situ in its original image space?
We devise an automated approach to extract and manipulate articulated objects in single images.
arXiv Detail & Related papers (2021-08-05T16:20:12Z) - DeformerNet: A Deep Learning Approach to 3D Deformable Object
Manipulation [5.733365759103406]
We propose a novel approach to 3D deformable object manipulation leveraging a deep neural network called DeformerNet.
We explicitly use 3D point clouds as the state representation and apply a convolutional neural network to the point clouds to learn 3D features.
Once trained in an end-to-end fashion, DeformerNet directly maps the current point cloud of a deformable object, as well as a target point cloud shape, to the desired displacement in robot gripper position.
arXiv Detail & Related papers (2021-07-16T18:20:58Z) - 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z) - From Points to Multi-Object 3D Reconstruction [71.17445805257196]
We propose a method to detect and reconstruct multiple 3D objects from a single RGB image.
A keypoint detector localizes objects as center points and directly predicts all object properties, including 9-DoF bounding boxes and 3D shapes.
The presented approach performs lightweight reconstruction in a single stage; it is real-time capable, fully differentiable, and end-to-end trainable.
arXiv Detail & Related papers (2020-12-21T18:52:21Z) - Learning to Rearrange Deformable Cables, Fabrics, and Bags with
Goal-Conditioned Transporter Networks [36.90218756798642]
Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation.
We develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures.
We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for learning robotic manipulation.
arXiv Detail & Related papers (2020-12-06T22:21:54Z) - Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential for building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
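The SculptBot entry above describes a latent dynamics model that predicts material deformations from a grasp action. The following is a minimal sketch of that general pattern only; the encoder, dimensions, and action parameterization are assumptions, and the paper itself uses a pre-trained point cloud reconstruction Transformer rather than the stand-in encoder shown here.
```python
# Hypothetical sketch of the latent-dynamics pattern from the SculptBot
# entry: encode the point cloud state to a latent vector, then predict the
# next latent state conditioned on a grasp action. All names and sizes are
# assumed for illustration.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, latent_dim: int = 256, action_dim: int = 7):
        super().__init__()
        # Stand-in state encoder: per-point MLP, max-pooled over points.
        self.encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Transition model: (latent state, action) -> next latent state.
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def encode(self, points: torch.Tensor) -> torch.Tensor:
        return self.encoder(points).max(dim=1).values   # (B, latent_dim)

    def step(self, z: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.dynamics(torch.cat([z, action], dim=-1))

# Rolling the model forward lets a planner score candidate grasp actions
# against a goal shape entirely in latent space.
model = LatentDynamics()
z = model.encode(torch.randn(1, 4096, 3))   # latent state of the material
z_next = model.step(z, torch.randn(1, 7))   # predicted state after a grasp
```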
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.