Joint Learning of 3D Shape Retrieval and Deformation
- URL: http://arxiv.org/abs/2101.07889v1
- Date: Tue, 19 Jan 2021 22:49:41 GMT
- Title: Joint Learning of 3D Shape Retrieval and Deformation
- Authors: Mikaela Angelina Uy, Vladimir G. Kim, Minhyuk Sung, Noam Aigerman,
Siddhartha Chaudhuri, Leonidas Guibas
- Abstract summary: We propose a novel technique for producing high-quality 3D models that match a given target object image or scan.
Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape.
- Score: 43.359465703912676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel technique for producing high-quality 3D models that match
a given target object image or scan. Our method is based on retrieving an
existing shape from a database of 3D models and then deforming its parts to
match the target shape. Unlike previous approaches that independently focus on
either shape retrieval or deformation, we propose a joint learning procedure
that simultaneously trains the neural deformation module along with the
embedding space used by the retrieval module. This enables our network to learn
a deformation-aware embedding space, so that retrieved models are more amenable
to match the target after an appropriate deformation. In fact, we use the
embedding space to guide the shape pairs used to train the deformation module,
so that it invests its capacity in learning deformations between meaningful
shape pairs. Furthermore, our novel part-aware deformation module can work with
inconsistent and diverse part-structures on the source shapes. We demonstrate
the benefits of our joint training not only on our novel framework, but also on
other state-of-the-art neural deformation modules proposed in recent years.
Lastly, we also show that our jointly-trained method outperforms a two-step
deformation-aware retrieval that uses direct optimization instead of neural
deformation or a pre-trained deformation module.
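The core idea in the abstract — train a deformation module on pairs chosen by the retrieval embedding, while training that embedding so its distances reflect *post-deformation* fit rather than raw shape similarity — can be illustrated with a toy sketch. This is not the authors' implementation: shapes are stood in for by feature vectors, and the embedding/deformation modules are simple linear maps with hand-derived gradients, purely to show the joint-training loop structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "shapes" are 16-d feature vectors; 8 sources in the database.
database = rng.normal(size=(8, 16))
target = rng.normal(size=(16,))

W_embed = rng.normal(scale=0.1, size=(16, 4))   # retrieval embedding (learned)
W_def = rng.normal(scale=0.1, size=(32, 16))    # deformation predictor (learned)

def embed_dist(shapes, tgt, W):
    """Squared distance to the target in the learned embedding space."""
    d = (shapes - tgt) @ W
    return np.sum(d ** 2, axis=-1)

def deform(src, tgt, W):
    """Deform a source toward the target (linear toy module)."""
    return src + np.concatenate([src, tgt]) @ W

def fit_loss(pred, tgt):
    return np.mean((pred - tgt) ** 2)

lr_def, lr_emb = 0.05, 0.005
for step in range(200):
    e = embed_dist(database, target, W_embed)
    fits = np.array([fit_loss(deform(s, target, W_def), target)
                     for s in database])

    # (1) Train the deformation module on the pair the embedding retrieves,
    #     so capacity is spent on meaningful shape pairs.
    src = database[int(np.argmin(e))]
    pred = deform(src, target, W_def)
    W_def -= lr_def * (2.0 / pred.size) * np.outer(
        np.concatenate([src, target]), pred - target)

    # (2) Deformation-aware embedding: regress embedding distances toward the
    #     post-deformation fitting error, not raw shape similarity.
    diff = database - target
    g = np.zeros_like(W_embed)
    for i in range(len(database)):
        g += 2 * (e[i] - fits[i]) * 2 * np.outer(diff[i], diff[i] @ W_embed)
    W_embed -= lr_emb * g / len(database)
```

Because the two losses are coupled through the retrieved pair, the embedding learns which sources become good matches *after* deformation — the property the paper argues a retrieval-then-deform pipeline trained in two independent steps cannot capture.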
Related papers
- Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require skeleton or skinning-weight priors; it only requires a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
- Deformation-Aware 3D Model Embedding and Retrieval [37.538109895618156]
We introduce a new problem of retrieving 3D models that are deformable to a given query shape.
We propose a novel deep embedding approach that learns the asymmetric relationships by leveraging location-dependent egocentric distance fields.
arXiv Detail & Related papers (2020-04-02T19:10:57Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.