RigNet: Neural Rigging for Articulated Characters
- URL: http://arxiv.org/abs/2005.00559v2
- Date: Sun, 5 Jul 2020 19:38:56 GMT
- Title: RigNet: Neural Rigging for Articulated Characters
- Authors: Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Chris Landreth and Karan
Singh
- Abstract summary: RigNet is an end-to-end automated method for producing animation rigs from input character models.
It predicts a skeleton that matches animator expectations in joint placement and topology.
It also estimates surface skin weights based on the predicted skeleton.
- Score: 34.46896139582373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present RigNet, an end-to-end automated method for producing animation
rigs from input character models. Given an input 3D model representing an
articulated character, RigNet predicts a skeleton that matches animator
expectations in joint placement and topology. It also estimates surface skin
weights based on the predicted skeleton. Our method is based on a deep
architecture that directly operates on the mesh representation without making
assumptions on shape class and structure. The architecture is trained on a
large and diverse collection of rigged models, including their mesh, skeletons
and corresponding skin weights. Our evaluation is three-fold: we show better
results than prior art when quantitatively compared to animator rigs;
qualitatively we show that our rigs can be expressively posed and animated at
multiple levels of detail; and finally, we evaluate the impact of various
algorithm choices on our output rigs.
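The two outputs the abstract describes, a skeleton and per-vertex skin weights, are exactly what standard linear blend skinning consumes to pose a mesh. As a minimal illustrative sketch (not RigNet's implementation; function and variable names here are hypothetical), the weights blend each bone's rest-to-posed transform per vertex:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Pose a mesh with per-vertex skin weights (standard LBS).

    vertices:        (V, 3) rest-pose positions
    weights:         (V, B) skinning weights; each row sums to 1
    bone_transforms: (B, 4, 4) rest-to-posed transform per bone
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])          # (V, 4) homogeneous
    # Blend the bone transforms per vertex: (V, 4, 4)
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)
    # Apply each vertex's blended transform to that vertex
    posed = np.einsum('vij,vj->vi', blended, homo)
    return posed[:, :3]
```

With identity bone transforms the mesh stays in its rest pose; moving one bone drags only the vertices weighted to it, which is why joint placement and weight quality both matter for rig expressiveness.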
Related papers
- RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets [47.81216915952291]
We present RigAnything, a novel autoregressive transformer-based model.
It makes 3D assets rig-ready by probabilistically generating joints, skeleton topologies, and assigning skinning weights in a template-free manner.
RigAnything demonstrates state-of-the-art performance across diverse object types, including humanoids, quadrupeds, marine creatures, insects, and many more.
arXiv Detail & Related papers (2025-02-13T18:59:13Z)
- Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters [86.13319549186959]
We present Make-It-Animatable, a novel data-driven method to make any 3D humanoid model ready for character animation in less than one second.
Our framework generates high-quality blend weights, bones, and pose transformations.
Compared to existing methods, our approach demonstrates significant improvements in both quality and speed.
arXiv Detail & Related papers (2024-11-27T10:18:06Z)
- SkelFormer: Markerless 3D Pose and Shape Estimation using Skeletal Transformers [57.46911575980854]
We introduce SkelFormer, a novel markerless motion capture pipeline for multi-view human pose and shape estimation.
Our method first uses off-the-shelf 2D keypoint estimators, pre-trained on large-scale in-the-wild data, to obtain 3D joint positions.
Next, we design a regression-based inverse-kinematic skeletal transformer that maps the joint positions to pose and shape representations from heavily noisy observations.
arXiv Detail & Related papers (2024-04-19T04:51:18Z)
- Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space informative and more efficient.
arXiv Detail & Related papers (2022-02-24T01:31:29Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
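The blend-shape idea in this entry can be pictured as pose-dependent corrective offsets added to the rest mesh before skinning. A minimal sketch, not the authors' implementation (the shape-weight activations would come from a learned, pose-conditioned network in the paper; names here are hypothetical):

```python
import numpy as np

def apply_blend_shapes(rest_vertices, blend_shapes, shape_weights):
    """Add corrective blend-shape offsets to a rest-pose mesh.

    rest_vertices: (V, 3) rest-pose positions
    blend_shapes:  (K, V, 3) displacement bases (one per blend shape)
    shape_weights: (K,) activations, e.g. predicted from the current pose
    """
    # Weighted sum of displacement bases: (V, 3)
    correction = np.einsum('k,kvd->vd', shape_weights, blend_shapes)
    return rest_vertices + correction
```

With all weights at zero the mesh is unchanged; nonzero weights deform it toward the corresponding shapes, which is how such correctives can clean up artifacts in joint regions before the skinning step.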
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
- A-NeRF: Surface-free Human 3D Pose Refinement via Neural Rendering [13.219688351773422]
We propose a test-time optimization approach for monocular motion capture that learns a volumetric body model of the user in a self-supervised manner.
Our approach is self-supervised and does not require any additional ground truth labels for appearance, pose, or 3D shape.
We demonstrate that our novel combination of a discriminative pose estimation technique with surface-free analysis-by-synthesis outperforms purely discriminative monocular pose estimation approaches.
arXiv Detail & Related papers (2021-02-11T18:58:31Z)
- TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style [43.99803542307155]
We present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style.
Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles.
Several experiments demonstrate TailorNet produces more realistic results than prior work, and even generates temporally coherent deformations.
arXiv Detail & Related papers (2020-03-10T08:49:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.