VINECS: Video-based Neural Character Skinning
- URL: http://arxiv.org/abs/2307.00842v1
- Date: Mon, 3 Jul 2023 08:35:53 GMT
- Title: VINECS: Video-based Neural Character Skinning
- Authors: Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt
- Abstract summary: We propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights.
We show that our approach outperforms the state of the art while not relying on dense 4D scans.
- Score: 82.39776643541383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rigging and skinning clothed human avatars is a challenging task and
traditionally requires a lot of manual work and expertise. Recent methods
addressing it either generalize across different characters or focus on
capturing the dynamics of a single character observed under different pose
configurations. However, the former methods typically predict only static
skinning weights, which perform poorly for highly articulated poses, and the
latter ones either require dense 3D character scans in different poses or
cannot generate an explicit mesh with vertex correspondence over time. To
address these challenges, we propose a fully automated approach for creating a
fully rigged character with pose-dependent skinning weights, which can be
learned solely from multi-view video. To this end, we first acquire a rigged
template, which is then statically skinned. Next, a coordinate-based MLP learns
a skinning weights field parameterized over the position in a canonical pose
space and the respective pose. Moreover, we introduce a pose- and
view-dependent appearance field, which allows us to differentiably render and
supervise the posed mesh using multi-view imagery. We show that our approach
outperforms the state of the art while not relying on dense 4D scans.
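The paper itself provides no code. The following is a minimal PyTorch sketch of the core idea stated in the abstract: a coordinate-based MLP maps a canonical-space vertex position plus the current pose to per-bone blend weights, which then drive standard linear blend skinning (LBS). All names (`SkinningWeightField`, `num_bones`, `pose_dim`), the network size, and the softmax normalization are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of a pose-dependent
# skinning weight field in the spirit of the VINECS abstract: an MLP over
# (canonical position, pose) predicts per-bone weights used for LBS.
import torch
import torch.nn as nn


class SkinningWeightField(nn.Module):
    """MLP over (canonical position, pose) -> per-bone blend weights."""

    def __init__(self, num_bones: int, pose_dim: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones),
        )

    def forward(self, x_canonical: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # x_canonical: (V, 3) vertex positions in the canonical pose space.
        # pose: (pose_dim,) pose parameters, broadcast to every vertex.
        pose = pose.expand(x_canonical.shape[0], -1)
        logits = self.mlp(torch.cat([x_canonical, pose], dim=-1))
        # Softmax keeps the weights non-negative and summing to one per vertex.
        return torch.softmax(logits, dim=-1)


def linear_blend_skinning(x_canonical, weights, bone_transforms):
    """Pose canonical vertices with per-vertex, per-bone blend weights.

    x_canonical: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4).
    """
    V = x_canonical.shape[0]
    x_h = torch.cat([x_canonical, torch.ones(V, 1)], dim=-1)       # (V, 4)
    per_bone = torch.einsum('bij,vj->vbi', bone_transforms, x_h)   # (V, B, 4)
    posed = (weights.unsqueeze(-1) * per_bone).sum(dim=1)          # (V, 4)
    return posed[:, :3]


# Usage: because the weights are a function of the pose, they can adapt to
# highly articulated configurations, unlike fixed (static) skinning weights.
field = SkinningWeightField(num_bones=24, pose_dim=72)
verts = torch.rand(1000, 3)
pose = torch.rand(72)
bones = torch.eye(4).expand(24, 4, 4)  # identity transforms as a placeholder
posed_verts = linear_blend_skinning(verts, field(verts, pose), bones)
```

Since every step above is differentiable, a rendering loss against multi-view images (as the abstract describes via the appearance field) can backpropagate all the way into the skinning weight MLP.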
Related papers
- VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation [79.99551055245071]
We propose VividPose, an end-to-end pipeline that ensures superior temporal stability.
An identity-aware appearance controller integrates additional facial information without compromising other appearance details.
A geometry-aware pose controller utilizes both dense rendering maps from SMPL-X and sparse skeleton maps.
VividPose exhibits superior generalization capabilities on our proposed in-the-wild dataset.
arXiv Detail & Related papers (2024-05-28T13:18:32Z)
- PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling [30.93155530590843]
We present PoseVocab, a novel pose encoding method that can encode high-fidelity human details.
Given multi-view RGB videos of a character, PoseVocab constructs key poses and latent embeddings based on the training poses.
Experiments show that our method outperforms other state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T17:25:36Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets at using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Neural Rendering of Humans in Novel View and Pose from Monocular Video [68.37767099240236]
We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input.
Our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
arXiv Detail & Related papers (2022-04-04T03:09:20Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and even generalizes well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)