Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects
- URL: http://arxiv.org/abs/2308.12590v2
- Date: Mon, 25 Dec 2023 13:48:33 GMT
- Title: Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects
- Authors: Baowen Zhang, Jiahe Li, Xiaoming Deng, Yinda Zhang, Cuixia Ma, Hongan
Wang
- Abstract summary: We propose a novel self-supervised approach to learning a neural implicit shape representation for deformable objects.
Our method requires no skeleton or skinning-weight priors; it only needs a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications, texture transfer and shape editing.
- Score: 26.102490905989338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a 3D shape representation with dense correspondence for deformable
objects is a fundamental problem in computer vision. Existing approaches often
need additional annotations from a specific semantic domain, e.g., skeleton
poses for human bodies or animals; such annotations require extra labeling
effort, suffer from error accumulation, and restrict the methods to specific
domains. In this paper, we propose a novel self-supervised approach to learn a
neural implicit shape representation for deformable objects, which represents
shapes with a template shape and dense correspondences in 3D. Our method
requires no skeleton or skinning-weight priors and only needs a collection of
shapes represented as signed distance fields. To handle large deformations, we
constrain the learned template shape to lie in the same latent space as the
training shapes, design a new formulation of the local rigid constraint that
enforces rigid transformations in local regions and addresses the local
reflection issue, and present a new hierarchical rigid constraint that reduces
the ambiguity arising from jointly learning the template shape and the
correspondences. Extensive experiments show that our model can represent shapes
with large deformations. We also show that our shape representation supports
two typical applications, texture transfer and shape editing, with competitive
performance. The code and models are available at
https://iscas3dv.github.io/deformshape
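
To make the described setup concrete, below is a minimal, hypothetical PyTorch sketch of a template-plus-deformation signed distance model with a local rigidity and anti-reflection penalty on the warp's Jacobian. The module names, network sizes, loss weighting, and the placeholder SDF supervision are illustrative assumptions based only on the abstract, not the authors' released implementation.

```python
# Minimal, hypothetical sketch of a template-plus-deformation implicit shape model.
# Module names, sizes and loss weights are assumptions, not the authors' code.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=128, depth=4):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.Softplus(beta=100)]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


class DeformableSDF(nn.Module):
    """Warp query points into a shared template space, then evaluate a template SDF."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.deform = mlp(3 + latent_dim, 3)   # per-shape deformation field D(x, z)
        self.template = mlp(3, 1)              # template signed distance field T(x)

    def forward(self, x, z):
        # x: (N, 3) query points, z: (latent_dim,) shape code
        zx = torch.cat([x, z.expand(x.shape[0], -1)], dim=-1)
        x_template = x + self.deform(zx)       # dense correspondence to the template
        return self.template(x_template), x_template


def local_rigidity_loss(x, x_template):
    """Encourage a locally rigid, reflection-free warp: the Jacobian J of the
    deformation should satisfy J^T J ~ I and det(J) > 0."""
    jac = torch.stack([
        torch.autograd.grad(x_template[:, i].sum(), x, create_graph=True)[0]
        for i in range(3)
    ], dim=1)                                   # (N, 3, 3)
    eye = torch.eye(3, device=x.device)
    orthogonality = ((jac.transpose(1, 2) @ jac - eye) ** 2).mean()
    reflection = torch.relu(-torch.det(jac)).mean()  # penalize mirror-like warps
    return orthogonality + reflection


# Toy usage: one gradient step on random points for a single shape code.
model = DeformableSDF()
z = torch.randn(64)
x = torch.rand(256, 3) * 2 - 1
x.requires_grad_(True)
sdf_pred, x_t = model(x, z)
sdf_gt = torch.zeros(256, 1)                    # placeholder ground-truth SDF samples
loss = ((sdf_pred - sdf_gt) ** 2).mean() + 0.1 * local_rigidity_loss(x, x_t)
loss.backward()
```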
Related papers
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present results of deforming 2D character and clothed-human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representations are a recent approach to learning shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding; a toy sketch of this root-finding idea appears after this list.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Joint Learning of 3D Shape Retrieval and Deformation [43.359465703912676]
We propose a novel technique for producing high-quality 3D models that match a given target object image or scan.
Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape.
arXiv Detail & Related papers (2021-01-19T22:49:41Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
- Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence [30.849927968528238]
We propose a novel Deformed Implicit Field representation for modeling 3D shapes of a category.
Our neural network, dubbed DIF-Net, jointly learns a shape latent space and these fields for 3D objects belonging to a category.
Experiments show that DIF-Net not only produces high-fidelity 3D shapes but also builds high-quality dense correspondences across different shapes.
arXiv Detail & Related papers (2020-11-27T10:45:26Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
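
The SNARF entry above summarizes finding canonical correspondences of deformed points by iterative root finding on a forward linear-blend-skinning map. Below is a simplified, self-contained toy sketch of that idea using a plain Newton iteration with a finite-difference Jacobian; the two-bone rig, skinning-weight function, and solver details are made-up assumptions for illustration (SNARF itself uses Broyden's method with multiple initializations to recover all correspondences).

```python
# Toy illustration of forward-skinning root finding; not the SNARF implementation.
import numpy as np

def bone_transforms():
    """Two hypothetical rigid bone transforms (rotation R, translation t)."""
    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    return [(np.eye(3), np.zeros(3)),            # bone 0: identity
            (R, np.array([0.2, 0.0, 0.0]))]      # bone 1: rotate and shift

def skin_weights(x_c):
    """Hypothetical smooth skinning weights in canonical space (2 bones)."""
    w1 = 1.0 / (1.0 + np.exp(-10.0 * x_c[0]))    # bone 1 dominates for x > 0
    return np.array([1.0 - w1, w1])

def forward_lbs(x_c, bones):
    """Linear blend skinning: map a canonical point to deformed space."""
    w = skin_weights(x_c)
    return sum(wi * (R @ x_c + t) for wi, (R, t) in zip(w, bones))

def canonical_correspondence(x_d, bones, x_init, iters=50, eps=1e-9):
    """Solve forward_lbs(x_c) = x_d for x_c by Newton iteration on the residual,
    using a finite-difference Jacobian."""
    x_c = x_init.astype(float)
    for _ in range(iters):
        r = forward_lbs(x_c, bones) - x_d
        if np.linalg.norm(r) < 1e-8:
            break
        J = np.zeros((3, 3))                     # numerical Jacobian of the residual
        h = 1e-5
        for j in range(3):
            dx = np.zeros(3); dx[j] = h
            J[:, j] = (forward_lbs(x_c + dx, bones) - forward_lbs(x_c, bones)) / h
        x_c = x_c - np.linalg.solve(J + eps * np.eye(3), r)
    return x_c

bones = bone_transforms()
x_c_true = np.array([0.3, 0.1, 0.0])
x_d = forward_lbs(x_c_true, bones)               # observed deformed point
x_c_est = canonical_correspondence(x_d, bones, x_init=x_d)
print("recovered canonical point:", x_c_est,
      "error:", np.linalg.norm(x_c_est - x_c_true))
```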