Skeleton-free Pose Transfer for Stylized 3D Characters
- URL: http://arxiv.org/abs/2208.00790v1
- Date: Thu, 28 Jul 2022 20:05:57 GMT
- Title: Skeleton-free Pose Transfer for Stylized 3D Characters
- Authors: Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou
- Abstract summary: We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose.
Our method is trained in a semi-supervised manner absorbing all existing character data with paired/unpaired poses and stylized shapes.
- Score: 53.33996932633865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first method that automatically transfers poses between
stylized 3D characters without skeletal rigging. In contrast to previous
attempts to learn pose transformations on fixed or topology-equivalent skeleton
templates, our method focuses on a novel scenario to handle skeleton-free
characters with diverse shapes, topologies, and mesh connectivities. The key
idea of our method is to represent the characters in a unified articulation
model so that the pose can be transferred through the corresponding parts. To
achieve this, we propose a novel pose transfer network that predicts the
character skinning weights and deformation transformations jointly to
articulate the target character to match the desired pose. Our method is
trained in a semi-supervised manner absorbing all existing character data with
paired/unpaired poses and stylized shapes. It generalizes well to unseen
stylized characters and inanimate objects. We conduct extensive experiments and
demonstrate the effectiveness of our method on this novel task.
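The abstract describes articulating a target character by combining predicted per-part skinning weights with per-part deformation transformations. The standard model this plugs into is linear blend skinning (LBS), where each deformed vertex is a weight-blended sum of the part transformations applied to it. Below is a minimal, hedged sketch of that blending step; the function name and toy data are illustrative, not taken from the paper.

```python
# Sketch of linear blend skinning (LBS): v' = sum_k w_k * (R_k @ v + t_k).
# Names and example values are illustrative assumptions, not the paper's API.

def blend_vertex(v, weights, transforms):
    """Deform one 3D vertex by a weighted blend of part transforms.

    v          : [x, y, z] rest-pose vertex position
    weights    : per-part skinning weights (should sum to 1)
    transforms : list of (R, t) pairs, R a 3x3 rotation, t a translation
    """
    out = [0.0, 0.0, 0.0]
    for w, (R, t) in zip(weights, transforms):
        for i in range(3):
            rotated = sum(R[i][j] * v[j] for j in range(3))
            out[i] += w * (rotated + t[i])
    return out

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Part 0 stays put; part 1 translates by +1 along x.
transforms = [(identity, [0.0, 0.0, 0.0]), (identity, [1.0, 0.0, 0.0])]
# A vertex influenced equally by both parts moves halfway:
print(blend_vertex([0.0, 0.0, 0.0], [0.5, 0.5], transforms))  # [0.5, 0.0, 0.0]
```

In the paper's skeleton-free setting, both the weights and the (R, t) pairs would be network predictions per character part rather than hand-rigged quantities.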
Related papers
- VINECS: Video-based Neural Character Skinning [82.39776643541383]
We propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights.
We show that our approach outperforms state-of-the-art while not relying on dense 4D scans.
arXiv Detail & Related papers (2023-07-03T08:35:53Z)
- Zero-shot Pose Transfer for Unrigged Stylized 3D Characters [87.39039511208092]
We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training.
We leverage the power of local deformation, but without requiring explicit correspondence labels.
Our model generalizes to categories with scarce annotation, such as stylized quadrupeds.
arXiv Detail & Related papers (2023-05-31T21:39:02Z)
- Neural Human Deformation Transfer [26.60034186410921]
We consider the problem of human deformation transfer, where the goal is to retarget poses between different characters.
We take a different approach and transform the identity of a character into a new identity without modifying the character's pose.
We show experimentally that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-03T15:51:30Z)
- 3D Human Shape Style Transfer [21.73251261476412]
We consider the problem of modifying/replacing the shape style of a real moving character with those of an arbitrary static real source character.
Traditional solutions follow a pose transfer strategy, from the moving character to the source character shape, that relies on skeletal pose parametrization.
In this paper, we explore an alternative approach that transfers the source shape style onto the moving character.
arXiv Detail & Related papers (2021-09-03T15:51:30Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Single-Shot Freestyle Dance Reenactment [89.91619150027265]
The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem.
We propose a novel method that can reanimate a single image by arbitrary video sequences, unseen during training.
arXiv Detail & Related papers (2020-12-02T12:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.