Zero-shot Pose Transfer for Unrigged Stylized 3D Characters
- URL: http://arxiv.org/abs/2306.00200v1
- Date: Wed, 31 May 2023 21:39:02 GMT
- Title: Zero-shot Pose Transfer for Unrigged Stylized 3D Characters
- Authors: Jiashun Wang, Xueting Li, Sifei Liu, Shalini De Mello, Orazio Gallo,
Xiaolong Wang, Jan Kautz
- Abstract summary: We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training.
We leverage the power of local deformation, but without requiring explicit correspondence labels.
Our model generalizes to categories with scarce annotation, such as stylized quadrupeds.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transferring the pose of a reference avatar to stylized 3D characters of
various shapes is a fundamental task in computer graphics. Existing methods
either require the stylized characters to be rigged, or they use the stylized
character in the desired pose as ground truth at training. We present a
zero-shot approach that requires only the widely available deformed
non-stylized avatars in training, and deforms stylized characters of
significantly different shapes at inference. Classical methods achieve strong
generalization by deforming the mesh at the triangle level, but this requires
labelled correspondences. We leverage the power of local deformation, but
without requiring explicit correspondence labels. We introduce a
semi-supervised shape-understanding module to bypass the need for explicit
correspondences at test time, and an implicit pose deformation module that
deforms individual surface points to match the target pose. Furthermore, to
encourage realistic and accurate deformation of stylized characters, we
introduce an efficient volume-based test-time training procedure. Because it
does not need rigging, nor the deformed stylized character at training time,
our model generalizes to categories with scarce annotation, such as stylized
quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed
method compared to the state-of-the-art approaches trained with comparable or
more supervision. Our project page is available at
https://jiashunwang.github.io/ZPT
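The abstract does not include pseudocode, but the implicit pose deformation module it describes can be pictured as a small coordinate MLP: each surface point, conditioned on a per-point feature from the shape-understanding module and a target pose code, is mapped to a deformed position. Below is a minimal sketch of that idea; all class names, dimensions, and the SMPL-like pose-code size are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of an implicit pose deformation module: an MLP that
# displaces each surface point, conditioned on a per-point shape feature
# and a target pose code. Names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ImplicitPoseDeformer(nn.Module):
    def __init__(self, shape_feat_dim=64, pose_dim=72, hidden=256):
        super().__init__()
        in_dim = 3 + shape_feat_dim + pose_dim  # xyz + shape feature + pose code
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point displacement
        )

    def forward(self, points, shape_feats, pose_code):
        # points: (N, 3); shape_feats: (N, F); pose_code: (P,)
        pose = pose_code.unsqueeze(0).expand(points.shape[0], -1)
        x = torch.cat([points, shape_feats, pose], dim=-1)
        return points + self.mlp(x)  # each surface point deformed independently

# Toy usage: 1000 surface points of an unrigged character.
deformer = ImplicitPoseDeformer()
points = torch.rand(1000, 3)
shape_feats = torch.rand(1000, 64)  # would come from the shape-understanding module
pose_code = torch.rand(72)          # target pose code (SMPL-like size, assumed)
deformed = deformer(points, shape_feats, pose_code)  # (1000, 3)
```
Because points are deformed independently, the same network can articulate meshes of very different topology: no rig, template, or per-character correspondence is needed at inference.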
Related papers
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods rely on example-based adaptation to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require skeleton or skinning-weight priors; it only needs a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- Skeleton-free Pose Transfer for Stylized 3D Characters
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that jointly predicts the character's skinning weights and deformation transformations to articulate the target character into the desired pose (a minimal linear blend skinning sketch appears after this list).
Our method is trained in a semi-supervised manner absorbing all existing character data with paired/unpaired poses and stylized shapes.
arXiv Detail & Related papers (2022-07-28T20:05:57Z)
- 3D Human Shape Style Transfer
We consider the problem of modifying or replacing the shape style of a real moving character with that of an arbitrary static real source character.
Traditional solutions follow a pose transfer strategy, from the moving character to the source character's shape, that relies on a skeletal pose parametrization.
In this paper, we explore an alternative approach that transfers the source shape style onto the moving character.
arXiv Detail & Related papers (2021-09-03T15:51:30Z)
- Learning Skeletal Articulations with Neural Blend Shapes
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Unsupervised Shape and Pose Disentanglement for 3D Meshes
We present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting.
We use a combination of self-consistency and cross-consistency constraints to learn pose and shape spaces from registered meshes (a schematic sketch of the cross-consistency swap appears after this list).
We demonstrate the usefulness of learned representations through a number of tasks including pose transfer and shape retrieval.
arXiv Detail & Related papers (2020-07-22T11:00:27Z)
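For the Skeleton-free Pose Transfer entry above, the articulation step is classic linear blend skinning: each vertex is moved by a weighted blend of per-part rigid transformations, v' = sum_k w_k T_k v. Below is a minimal sketch of that blending step, assuming the network has already produced the skinning weights and transformations; the predictor itself is omitted and all shapes are illustrative.
```python
# Minimal linear blend skinning (LBS) step, as used (in spirit) by
# skeleton-free pose transfer: each vertex is deformed by a weighted blend
# of per-part rigid transformations. The network predicting `weights` and
# `transforms` is omitted; names and shapes here are assumptions.
import torch

def linear_blend_skinning(verts, weights, transforms):
    # verts: (N, 3) rest-pose vertices
    # weights: (N, K) skinning weights, each row sums to 1
    # transforms: (K, 4, 4) homogeneous per-part transformations
    homo = torch.cat([verts, torch.ones(verts.shape[0], 1)], dim=-1)  # (N, 4)
    per_part = torch.einsum('kij,nj->nki', transforms, homo)          # (N, K, 4)
    blended = (weights.unsqueeze(-1) * per_part).sum(dim=1)           # (N, 4)
    return blended[:, :3]

# Toy check: two parts, identity transforms leave the mesh unchanged.
verts = torch.rand(100, 3)
weights = torch.softmax(torch.rand(100, 2), dim=-1)
transforms = torch.eye(4).repeat(2, 1, 1)
assert torch.allclose(linear_blend_skinning(verts, weights, transforms), verts)
```
Predicting the weights and transformations per character, rather than requiring a shared rig, is what lets such methods handle stylized shapes without skeletal rigging.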
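For the Unsupervised Shape and Pose Disentanglement entry, the cross-consistency constraint can be sketched as a code swap: encode two registered meshes of the same subject into (shape, pose) codes, exchange the pose codes, decode, and penalize any deviation from the corresponding target meshes. The encoders and decoder below are hypothetical stand-ins meant only to illustrate the constraint, not the paper's architecture.
```python
# Schematic cross-consistency loss for unsupervised shape/pose
# disentanglement. `enc_shape`, `enc_pose`, and `dec` are hypothetical
# networks; this illustrates the training constraint only.
import torch

def cross_consistency_loss(mesh_a, mesh_b, enc_shape, enc_pose, dec):
    # mesh_a, mesh_b: (N, 3) registered meshes of the same subject in two poses.
    s_a, p_a = enc_shape(mesh_a), enc_pose(mesh_a)
    s_b, p_b = enc_shape(mesh_b), enc_pose(mesh_b)
    # Swapping pose codes should still reconstruct the corresponding mesh:
    # subject A's shape code + mesh B's pose code -> mesh B, and vice versa.
    recon_b = dec(s_a, p_b)
    recon_a = dec(s_b, p_a)
    return ((recon_a - mesh_a) ** 2).mean() + ((recon_b - mesh_b) ** 2).mean()

# Toy check with identity-style stand-ins for the networks.
enc_shape = lambda m: m.mean(dim=0)  # crude global "shape" summary
enc_pose = lambda m: m               # crude per-vertex "pose" stand-in
dec = lambda s, p: p                 # decoder that trusts the pose code
a, b = torch.rand(50, 3), torch.rand(50, 3)
print(cross_consistency_loss(a, b, enc_shape, enc_pose, dec))  # tensor(0.)
```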