NiLBS: Neural Inverse Linear Blend Skinning
- URL: http://arxiv.org/abs/2004.05980v1
- Date: Mon, 6 Apr 2020 20:46:37 GMT
- Title: NiLBS: Neural Inverse Linear Blend Skinning
- Authors: Timothy Jeruzalski, David I.W. Levin, Alec Jacobson, Paul Lalonde,
Mohammad Norouzi, Andrea Tagliasacchi
- Abstract summary: We introduce a method to invert the deformations produced by traditional skinning techniques, using a neural network parameterized by pose.
The ability to invert these deformations allows values (e.g., distance function, signed distance function, occupancy) to be pre-computed at rest pose, and then efficiently queried when the character is deformed.
- Score: 59.22647012489496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this technical report, we investigate efficient representations of
articulated objects (e.g. human bodies), which is an important problem in
computer vision and graphics. To deform articulated geometry, existing
approaches represent objects as meshes and deform them using "skinning"
techniques. The skinning operation allows a wide range of deformations to be
achieved with a small number of control parameters. This paper introduces a
method to invert the deformations produced by traditional skinning techniques,
using a neural network parameterized by pose. The ability to invert these
deformations allows values (e.g., distance function, signed distance function,
occupancy) to be pre-computed at rest pose, and then efficiently queried when
the character is deformed. We leave empirical evaluation of our approach to
future work.
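As a rough illustration of the idea in the abstract (not the authors' implementation), the sketch below shows how a value pre-computed at rest pose, such as a signed distance function, could be queried at a point on the deformed character: a hypothetical pose-conditioned network predicts skinning weights at the query point, the blended bone transform is inverted, and the rest-pose SDF is evaluated at the resulting location. The names `weight_net` and `rest_sdf` are stand-ins, not components from the paper.

```python
# Minimal sketch of the inverse-skinning query described above; assumptions:
# `weight_net` is a hypothetical pose-conditioned network predicting skinning
# weights at a deformed-space point, and `rest_sdf` is a signed distance
# function pre-computed at rest pose.
import numpy as np

def blend_transforms(weights, bone_transforms):
    """Linear blend skinning: weighted sum of per-bone 4x4 rigid transforms."""
    return np.tensordot(weights, bone_transforms, axes=([0], [0]))  # (4, 4)

def query_rest_pose_sdf(x_deformed, pose, bone_transforms, weight_net, rest_sdf):
    """Evaluate a rest-pose SDF at a point given in deformed (posed) space.

    x_deformed      : (3,) query point on or near the deformed character
    bone_transforms : (J, 4, 4) rest-to-posed transform per bone
    weight_net      : callable (x, pose) -> (J,) convex skinning weights (assumed)
    rest_sdf        : callable (3,) -> float, pre-computed at rest pose (assumed)
    """
    w = weight_net(x_deformed, pose)            # weights predicted in deformed space
    T = blend_transforms(w, bone_transforms)    # blended LBS transform
    x_h = np.append(x_deformed, 1.0)            # homogeneous coordinates
    x_rest = (np.linalg.inv(T) @ x_h)[:3]       # inverse LBS back to rest pose
    return rest_sdf(x_rest)
```

Under these assumptions, once the rest-pose SDF is pre-computed, each posed-space query reduces to one weight prediction and one small matrix inverse per point, which is what makes the deformed-space lookup cheap.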
Related papers
- Neural Implicit Shape Editing using Boundary Sensitivity [12.621108702820313]
We leverage boundary sensitivity to express how perturbations in parameters move the shape boundary.
With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation.
arXiv Detail & Related papers (2023-04-24T13:04:15Z) - Deformable Surface Reconstruction via Riemannian Metric Preservation [9.74575494970697]
Estimating the pose of an object from a monocular image is an inverse problem fundamental in computer vision.
This paper presents an approach to inferring continuous deformable surfaces from a sequence of images.
arXiv Detail & Related papers (2022-12-22T10:45:08Z) - Reduced Representation of Deformation Fields for Effective Non-rigid
Shape Matching [26.77241999731105]
We present a novel approach for computing correspondences between non-rigid objects by exploiting a reduced representation of deformation fields.
By letting the network learn deformation parameters at a sparse set of positions in space (nodes), we reconstruct the continuous deformation field in a closed-form with guaranteed smoothness.
Our model has high expressive power and is able to capture complex deformations.
arXiv Detail & Related papers (2022-11-26T16:11:17Z) - Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z) - Animatable Implicit Neural Representations for Creating Realistic
Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z) - DeepMLS: Geometry-Aware Control Point Deformation [76.51312491336343]
We introduce DeepMLS, a space-based deformation technique, guided by a set of displaced control points.
We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters.
Our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects.
arXiv Detail & Related papers (2022-01-05T23:55:34Z) - Neural Actor: Neural Free-view Synthesis of Human Actors with Pose
Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than state-of-the-art methods on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z) - SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local
Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.