SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural
Implicit Shapes
- URL: http://arxiv.org/abs/2104.03953v1
- Date: Thu, 8 Apr 2021 17:54:59 GMT
- Title: SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural
Implicit Shapes
- Authors: Xu Chen, Yufeng Zheng, Michael J. Black, Otmar Hilliges, Andreas
Geiger
- Abstract summary: We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with those of neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
- Score: 117.76767853430243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit surface representations have emerged as a promising paradigm
to capture 3D shapes in a continuous and resolution-independent manner.
However, adapting them to articulated shapes is non-trivial. Existing
approaches learn a backward warp field that maps deformed to canonical points.
However, this is problematic since the backward warp field is pose dependent
and thus requires large amounts of data to learn. To address this, we introduce
SNARF, which combines the advantages of linear blend skinning (LBS) for
polygonal meshes with those of neural implicit surfaces by learning a forward
deformation field without direct supervision. This deformation field is defined
in canonical, pose-independent space, allowing for generalization to unseen
poses. Learning the deformation field from posed meshes alone is challenging
since the correspondences of deformed points are defined implicitly and may not
be unique under changes of topology. We propose a forward skinning model that
finds all canonical correspondences of any deformed point using iterative root
finding. We derive analytical gradients via implicit differentiation, enabling
end-to-end training from 3D meshes with bone transformations. Compared to
state-of-the-art neural implicit representations, our approach generalizes
better to unseen poses while preserving accuracy. We demonstrate our method in
challenging scenarios on (clothed) 3D humans in diverse and unseen poses.
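A minimal, self-contained sketch of the core idea follows (it is not the authors' released implementation): the canonical correspondence of a deformed point is recovered by solving the forward linear-blend-skinning warp for its root. The analytic weight field, bone centers, single initialization, and Newton step with a finite-difference Jacobian are illustrative assumptions; the paper instead learns the weights with a neural network and uses Broyden's method with multiple per-bone initializations to find all correspondences.

```python
# Sketch of forward-skinning correspondence search via iterative root finding.
# Assumptions (not from the paper): a toy analytic weight field, one initialization,
# and a Newton solver with a finite-difference Jacobian.
import numpy as np

def skinning_weights(x_c, bone_centers):
    """Toy canonical weight field: softmax over negative squared distances
    to hypothetical bone centers (the paper learns this field with an MLP)."""
    d2 = np.sum((bone_centers - x_c) ** 2, axis=1)
    w = np.exp(-d2)
    return w / w.sum()

def forward_warp(x_c, bones, bone_centers):
    """Linear blend skinning: x_d = sum_i w_i(x_c) * (B_i x_c)."""
    w = skinning_weights(x_c, bone_centers)
    x_h = np.append(x_c, 1.0)  # homogeneous coordinates
    return sum(w_i * (B_i @ x_h) for w_i, B_i in zip(w, bones))[:3]

def find_canonical(x_d, bones, bone_centers, iters=50, tol=1e-9):
    """Solve d(x_c, B) = x_d for x_c by iterative root finding."""
    # Initialize by rigidly un-posing with one bone transform; the paper
    # starts from every bone to recover all correspondences.
    x_c = (np.linalg.inv(bones[0]) @ np.append(x_d, 1.0))[:3]
    for _ in range(iters):
        f0 = forward_warp(x_c, bones, bone_centers)
        r = f0 - x_d                            # residual to drive to zero
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((3, 3))                    # finite-difference Jacobian
        h = 1e-5
        for k in range(3):
            step = np.zeros(3)
            step[k] = h
            J[:, k] = (forward_warp(x_c + step, bones, bone_centers) - f0) / h
        x_c = x_c - np.linalg.solve(J, r)       # Newton update
    return x_c

# Toy example: two bones, the second translated along y.
bones = [np.eye(4), np.eye(4)]
bones[1][:3, 3] = [0.0, 0.5, 0.0]
bone_centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
x_d = np.array([0.8, 0.4, 0.0])
x_c = find_canonical(x_d, bones, bone_centers)
print(x_c, forward_warp(x_c, bones, bone_centers))  # warp(x_c) should match x_d
```

At training time, the paper obtains gradients of the recovered roots by implicit differentiation of the condition d(x_c, B) = x_d, i.e. dx_c/dtheta = -(dd/dx_c)^(-1) dd/dtheta, rather than by backpropagating through the solver iterations.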
Related papers
- Self-supervised Learning of Implicit Shape Representation with Dense
Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require the priors of skeleton and skinning weight, and only requires a collection of shapes represented in signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz-regularized networks, ours is computationally fast and can be implemented in four lines of code. (A generic sketch of a Lipschitz-style penalty appears after this list.)
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons, without the limitations of a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representations are a recent approach to learning shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth interpolations and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
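Following up on the Lipschitz-regularization entry above: the sketch below shows a generic, hypothetical Lipschitz-style penalty, namely the product of per-layer weight-matrix norms, which upper-bounds an MLP's Lipschitz constant when its activations are 1-Lipschitz. It illustrates the general idea only and is not the specific formulation of that paper; the loss weighting and placeholder task loss are arbitrary choices for the example.

```python
# Generic Lipschitz-style regularizer (illustration only, not the exact method
# of "Learning Smooth Neural Functions via Lipschitz Regularization").
import torch
import torch.nn as nn

def lipschitz_bound(mlp: nn.Sequential) -> torch.Tensor:
    """Product of per-layer inf-operator norms (max absolute row sums) over all
    Linear layers; with 1-Lipschitz activations such as ReLU this upper-bounds
    the Lipschitz constant of the whole network."""
    bound = torch.ones(())
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            bound = bound * layer.weight.abs().sum(dim=1).max()
    return bound

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(16, 3)
task_loss = mlp(x).pow(2).mean()                 # placeholder task loss
loss = task_loss + 1e-4 * lipschitz_bound(mlp)   # hypothetical weighting
loss.backward()
```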