Identity-Disentangled Neural Deformation Model for Dynamic Meshes
- URL: http://arxiv.org/abs/2109.15299v1
- Date: Thu, 30 Sep 2021 17:43:06 GMT
- Title: Identity-Disentangled Neural Deformation Model for Dynamic Meshes
- Authors: Binbin Xu, Lingni Ma, Yuting Ye, Tanner Schmidt, Christopher D. Twigg,
Steven Lovegrove
- Abstract summary: We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
- Score: 8.826835863410109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural shape models can represent complex 3D shapes with a compact latent
space. When applied to dynamically deforming shapes such as human hands,
however, they must preserve the temporal coherence of the deformation as
well as the intrinsic identity of the subject. These properties are difficult
to regularize with manually designed loss functions. In this paper, we learn a
neural deformation model that disentangles the identity-induced shape
variations from pose-dependent deformations using implicit neural functions. We
perform template-free unsupervised learning on 3D scans without explicit mesh
correspondence or semantic correspondences of shapes across subjects. We can
then apply the learned model to reconstruct partial dynamic 4D scans of novel
subjects performing unseen actions. We propose two methods to integrate global
pose alignment with our neural deformation model. Experiments demonstrate the
efficacy of our method in disentangling identity and pose. Our
method also outperforms traditional skeleton-driven models in reconstructing
surface details such as palm prints or tendons without limitations from a fixed
template.
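The core idea of the abstract, disentangling identity-induced shape variation from pose-dependent deformation with an implicit neural function, can be sketched as a network that maps a query point together with separate identity and pose latent codes to a signed distance. The sketch below is illustrative only: the layer sizes, latent dimensions, and function names are assumptions, not the authors' actual architecture, and random weights stand in for trained parameters.

```python
import numpy as np

# Hypothetical sketch: an implicit function f(x, z_id, z_pose) -> signed
# distance, where z_id captures subject identity and z_pose captures
# pose-dependent deformation. Sizes are illustrative assumptions.
rng = np.random.default_rng(0)
D_ID, D_POSE, HIDDEN = 8, 16, 32

# Random weights stand in for trained network parameters.
W1 = rng.standard_normal((3 + D_ID + D_POSE, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def sdf(x, z_id, z_pose):
    """Evaluate the implicit surface at 3D query points x of shape (N, 3)."""
    h = np.concatenate([x,
                        np.tile(z_id, (len(x), 1)),
                        np.tile(z_pose, (len(x), 1))], axis=1)
    h = np.maximum(h @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)   # one signed distance per point

# Disentanglement in use: holding the identity code fixed while varying
# the pose code yields two deformations of the same subject.
x = rng.standard_normal((5, 3))
z_id = rng.standard_normal(D_ID)
d_pose_a = sdf(x, z_id, rng.standard_normal(D_POSE))
d_pose_b = sdf(x, z_id, rng.standard_normal(D_POSE))
print(d_pose_a.shape, d_pose_b.shape)
```

Because identity and pose enter through separate codes, fitting a partial 4D scan of a novel subject reduces to optimizing one identity code plus a per-frame pose code, which is the reconstruction setting the abstract describes.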
Related papers
- ReshapeIT: Reliable Shape Interaction with Implicit Template for Anatomical Structure Reconstruction [59.971808117043366]
ReShapeIT represents an anatomical structure with an implicit template field shared within the same category.
It ensures the implicit template field generates valid templates by strengthening the constraint of the correspondence between the instance shape and the template shape.
A template Interaction Module is introduced to reconstruct unseen shapes by interacting the valid template shapes with the instance-wise latent codes.
arXiv Detail & Related papers (2023-12-11T07:09:32Z) - Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require the priors of skeleton and skinning weight, and only requires a collection of shapes represented in signed distance fields.
Our model can represent shapes with large deformations and supports typical applications such as texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z) - Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z) - CaDeX: Learning Canonical Deformation Coordinate Space for Dynamic
Surface Representation via Neural Homeomorphism [46.234728261236015]
We introduce Canonical Deformation Coordinate Space (CaDeX), a unified representation of both shape and nonrigid motion.
Our novel deformation representation and its implementation are simple, efficient, and guarantee cycle consistency.
We demonstrate state-of-the-art performance in modelling a wide range of deformable objects.
arXiv Detail & Related papers (2022-03-30T17:59:23Z) - Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape
Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z) - SPAMs: Structured Implicit Parametric Models [30.19414242608965]
We learn Structured-implicit PArametric Models (SPAMs) as a deformable object representation that structurally decomposes non-rigid object motion into part-based disentangled representations of shape and pose.
Experiments demonstrate that our part-aware shape and pose understanding leads to state-of-the-art performance in reconstruction and tracking of depth sequences of complex deforming object motion.
arXiv Detail & Related papers (2022-01-20T12:33:46Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover detailed avatars from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local
Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z) - SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural
Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
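The SNARF entry above hinges on one numerical step: given a forward warp w that maps canonical points to deformed space, recover the canonical correspondence of a deformed point x_d by solving w(x_c) = x_d with iterative root finding. The toy sketch below is not the authors' code; the warp function is an invented, invertible toy deformation, and Newton's method with a finite-difference Jacobian stands in for the paper's root-finding procedure.

```python
import numpy as np

def warp(x):
    """Assumed toy forward deformation: a rotation about z plus a small
    nonlinear, pose-dependent offset (purely illustrative)."""
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    return R @ x + 0.05 * np.sin(x)

def find_canonical(x_d, iters=20, eps=1e-9):
    """Solve warp(x_c) = x_d by Newton iteration on the residual."""
    x = x_d.copy()                      # initialize at the deformed point
    for _ in range(iters):
        r = warp(x) - x_d               # residual of w(x) = x_d
        if np.linalg.norm(r) < eps:
            break
        # Central finite-difference Jacobian of the warp at x.
        J = np.empty((3, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = 1e-5
            J[:, j] = (warp(x + dx) - warp(x - dx)) / 2e-5
        x = x - np.linalg.solve(J, r)   # Newton step
    return x

x_c_true = np.array([0.4, -0.2, 0.7])
x_d = warp(x_c_true)                    # deform a known canonical point
x_c = find_canonical(x_d)               # recover it from the deformed point
print(np.allclose(x_c, x_c_true, atol=1e-6))
```

In the actual SNARF setting the warp is learned linear blend skinning and a deformed point may have multiple canonical roots (one per bone), which is why the paper searches for all correspondences rather than a single one as in this sketch.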
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.