Learning Skeletal Articulations with Neural Blend Shapes
- URL: http://arxiv.org/abs/2105.02451v1
- Date: Thu, 6 May 2021 05:58:13 GMT
- Title: Learning Skeletal Articulations with Neural Blend Shapes
- Authors: Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga
Sorkine-Hornung, Baoquan Chen
- Abstract summary: We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
- Score: 57.879030623284216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animating a newly designed character using motion capture (mocap) data is a
long-standing problem in computer animation. A key consideration is the
skeletal structure that should correspond to the available mocap data, and the
shape deformation in the joint regions, which often requires a tailored,
pose-specific refinement. In this work, we develop a neural technique for
articulating 3D characters using enveloping with a pre-defined skeletal
structure which produces high quality pose dependent deformations. Our
framework learns to rig and skin characters with the same articulation
structure (e.g., bipeds or quadrupeds), and builds the desired skeleton
hierarchy into the network architecture. Furthermore, we propose neural blend
shapes--a set of corrective pose-dependent shapes which improve the deformation
quality in the joint regions in order to address the notorious artifacts
resulting from standard rigging and skinning. Our system estimates neural blend
shapes for input meshes with arbitrary connectivity, as well as weighting
coefficients which are conditioned on the input joint rotations. Unlike recent
deep learning techniques which supervise the network with ground-truth rigging
and skinning parameters, our approach does not assume that the training data
has a specific underlying deformation model. Instead, during training, the
network observes deformed shapes and learns to infer the corresponding rig,
skin and blend shapes using indirect supervision. During inference, we
demonstrate that our network generalizes to unseen characters with arbitrary
mesh connectivity, including unrigged characters built by 3D artists.
Conforming to standard skeletal animation models enables direct plug-and-play
in standard animation software, as well as game engines.
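The abstract describes corrective pose-dependent blend shapes applied on top of standard skeletal enveloping. A minimal sketch of that deformation model in NumPy, assuming linear blend skinning as the base enveloping and treating the blend-shape basis and pose-conditioned coefficients as given (in the paper they are network outputs; all names here are illustrative):

```python
import numpy as np

def lbs_with_blend_shapes(rest_verts, weights, joint_transforms,
                          blend_shapes, blend_coeffs):
    """Deform a mesh with linear blend skinning plus corrective blend shapes.

    rest_verts:       (V, 3) rest-pose vertex positions
    weights:          (V, J) skinning weights, rows sum to 1
    joint_transforms: (J, 4, 4) world-space joint transformation matrices
    blend_shapes:     (K, V, 3) corrective blend-shape basis
    blend_coeffs:     (K,) weighting coefficients conditioned on joint rotations
    """
    # Add the weighted corrective displacements to the rest shape
    # before skinning, as in standard corrective blend-shape pipelines.
    corrected = rest_verts + np.einsum('k,kvc->vc', blend_coeffs, blend_shapes)

    # Homogeneous coordinates for the corrected rest vertices: (V, 4).
    homo = np.concatenate([corrected, np.ones((corrected.shape[0], 1))], axis=1)

    # Blend the joint transforms per vertex, then apply them: (V, 4, 4).
    per_vertex_T = np.einsum('vj,jab->vab', weights, joint_transforms)
    deformed = np.einsum('vab,vb->va', per_vertex_T, homo)[:, :3]
    return deformed
```

With identity joint transforms and zero coefficients this reduces to the rest pose, which is a quick sanity check that the skinning weights are normalized.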
Related papers
- Learning Localization of Body and Finger Animation Skeleton Joints on Three-Dimensional Models of Human Bodies [0.0]
Our work proposes one such solution to the problem of positioning body and finger animation skeleton joints within 3D models of human bodies.
By comparing our method with the state-of-the-art, we show that it is possible to achieve significantly better results with a simpler architecture.
arXiv Detail & Related papers (2024-07-11T13:16:02Z)
- Pose Modulated Avatars from Video [22.395774558845336]
We develop a two-branch neural network that is adaptive and explicit in the frequency domain.
The first branch is a graph neural network that models correlations among body parts locally.
The second branch combines these correlation features to a set of global frequencies and then modulates the feature encoding.
arXiv Detail & Related papers (2023-08-23T06:49:07Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- RigNet: Neural Rigging for Articulated Characters [34.46896139582373]
RigNet is an end-to-end automated method for producing animation rigs from input character models.
It predicts a skeleton that matches the animator expectations in joint placement and topology.
It also estimates surface skin weights based on the predicted skeleton.
arXiv Detail & Related papers (2020-05-01T18:12:44Z)
- TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style [43.99803542307155]
We present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style.
Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles.
Several experiments demonstrate TailorNet produces more realistic results than prior work, and even generates temporally coherent deformations.
arXiv Detail & Related papers (2020-03-10T08:49:51Z)
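A primitive shared by several of the papers above (e.g., RigNet's skin-weight estimation and the linear-blend-skinning deformation fields) is a per-vertex, per-joint skinning weight matrix that must be nonnegative with rows summing to one. A softmax over raw per-joint scores is one common way to enforce this constraint; a minimal sketch, with purely illustrative names, is:

```python
import numpy as np

def scores_to_skin_weights(scores):
    """Convert raw per-vertex, per-joint scores (V, J) into valid
    skinning weights: nonnegative and summing to 1 per vertex,
    via a numerically stable row-wise softmax."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # avoid overflow in exp
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)
```

Weights produced this way can be fed directly to a linear-blend-skinning step, since every row already forms a convex combination over the joints.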
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.