Make-It-Poseable: Feed-forward Latent Posing Model for 3D Humanoid Character Animation
- URL: http://arxiv.org/abs/2512.16767v1
- Date: Thu, 18 Dec 2025 17:01:44 GMT
- Title: Make-It-Poseable: Feed-forward Latent Posing Model for 3D Humanoid Character Animation
- Authors: Zhiyang Guo, Ori Zhang, Jax Xiang, Alan Zhao, Wengang Zhou, Houqiang Li
- Abstract summary: We introduce Make-It-Poseable, a novel feed-forward framework that reformulates character posing as a latent-space transformation problem. Our method reconstructs the character in new poses by directly manipulating its latent representation. It also naturally extends to 3D editing applications like part replacement and refinement.
- Score: 74.6792422278706
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Posing 3D characters is a fundamental task in computer graphics and vision. However, existing methods like auto-rigging and pose-conditioned generation often struggle with challenges such as inaccurate skinning weight prediction, topological imperfections, and poor pose conformance, limiting their robustness and generalizability. To overcome these limitations, we introduce Make-It-Poseable, a novel feed-forward framework that reformulates character posing as a latent-space transformation problem. Instead of deforming mesh vertices as in traditional pipelines, our method reconstructs the character in new poses by directly manipulating its latent representation. At the core of our method is a latent posing transformer that manipulates shape tokens based on skeletal motion. This process is facilitated by a dense pose representation for precise control. To ensure high-fidelity geometry and accommodate topological changes, we also introduce a latent-space supervision strategy and an adaptive completion module. Our method demonstrates superior performance in posing quality. It also naturally extends to 3D editing applications like part replacement and refinement.
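The abstract's core mechanism, a latent posing transformer that manipulates shape tokens based on skeletal motion, maps naturally onto a cross-attention design. The PyTorch sketch below illustrates that idea under stated assumptions (shape tokens from a pretrained shape VAE, per-joint 4x4 rigid transforms as the motion input); it is not the authors' implementation, and every module name and dimension is hypothetical.

```python
# Illustrative latent posing transformer: shape tokens attend to skeletal-
# motion tokens and come back posed. All names/dimensions are assumptions.
import torch
import torch.nn as nn

class LatentPosingTransformer(nn.Module):
    def __init__(self, dim=512, n_heads=8, n_layers=4):
        super().__init__()
        # Embed per-joint motion, here a flattened 4x4 rigid transform.
        self.motion_embed = nn.Linear(16, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        # Shape tokens are the queries; motion tokens are the memory.
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, shape_tokens, joint_transforms):
        # shape_tokens:     (B, N, dim) latents from a pretrained shape VAE
        # joint_transforms: (B, J, 4, 4) target-pose transform per joint
        motion_tokens = self.motion_embed(joint_transforms.flatten(2))
        return self.decoder(shape_tokens, motion_tokens)  # posed tokens

# Dummy usage: 1024 shape tokens, 24 joints at the identity pose.
model = LatentPosingTransformer()
tokens = torch.randn(1, 1024, 512)
pose = torch.eye(4).repeat(1, 24, 1, 1)
posed_tokens = model(tokens, pose)  # (1, 1024, 512)
```

Decoding the posed tokens back to geometry would reuse whatever latent shape decoder produced the input tokens.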
Related papers
- PoseMaster: Generating 3D Characters in Arbitrary Poses from a Single Image [37.332231168919705]
We propose PoseMaster, an end-to-end controllable 3D character generation framework. Specifically, we unify pose transformation and 3D character generation in a flow-based, 3D-native generation framework. For robust multi-condition control, we randomly empty the pose and image conditions during training, which improves the effectiveness and generalizability of pose control (see the sketch after this entry).
arXiv Detail & Related papers (2025-06-26T08:03:14Z)
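PoseMaster's "randomly empty the pose condition and the image condition during training" reads as the condition-dropout trick familiar from classifier-free guidance. A minimal sketch, assuming each condition is a per-sample feature vector:

```python
# Condition dropout during training: with probability p_drop per sample,
# replace a condition with a "null" value. Zeros stand in for the null
# condition here; a learned null embedding is an equally common choice.
import torch

def drop_conditions(pose_cond, image_cond, p_drop=0.1):
    # pose_cond, image_cond: (B, D) per-sample condition features
    b = pose_cond.shape[0]
    drop_pose = torch.rand(b, device=pose_cond.device) < p_drop
    drop_image = torch.rand(b, device=image_cond.device) < p_drop
    pose_cond = torch.where(drop_pose[:, None],
                            torch.zeros_like(pose_cond), pose_cond)
    image_cond = torch.where(drop_image[:, None],
                             torch.zeros_like(image_cond), image_cond)
    return pose_cond, image_cond
```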
- Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters [86.13319549186959]
We present Make-It-Animatable, a novel data-driven method that makes any 3D humanoid model ready for character animation in less than one second. Our framework generates high-quality blend weights, bones, and pose transformations (see the sketch after this entry). Compared to existing methods, our approach demonstrates significant improvements in both quality and speed.
arXiv Detail & Related papers (2024-11-27T10:18:06Z)
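The blend weights, bones, and pose transformations that Make-It-Animatable predicts are the standard inputs to linear blend skinning (LBS). For reference, the textbook LBS step those outputs would feed, not the paper's own pipeline:

```python
# Textbook linear blend skinning: each vertex is moved by a per-vertex
# blend of the bone transforms. Not Make-It-Animatable's actual code.
import torch

def linear_blend_skinning(vertices, weights, bone_transforms):
    # vertices:        (N, 3) rest-pose positions
    # weights:         (N, J) blend weights, each row summing to 1
    # bone_transforms: (J, 4, 4) rigid transform per bone
    homo = torch.cat([vertices, torch.ones(len(vertices), 1)], dim=1)
    blended = torch.einsum('nj,jrc->nrc', weights, bone_transforms)
    posed = torch.einsum('nrc,nc->nr', blended, homo)
    return posed[:, :3]
```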
- VINECS: Video-based Neural Character Skinning [82.39776643541383]
We propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights (see the sketch after this entry).
We show that our approach outperforms the state of the art without relying on dense 4D scans.
arXiv Detail & Related papers (2023-07-03T08:35:53Z)
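VINECS's pose-dependent skinning weights differ from static LBS weights in that the weights themselves become a function of the current pose. A hypothetical sketch of one way to realize this, with the per-vertex features, axis-angle pose encoding, and all dimensions assumed:

```python
# Hypothetical pose-dependent skinning weights: an MLP maps per-vertex
# features plus the current pose to softmax weights over joints, so the
# weights can vary with pose instead of staying fixed.
import torch
import torch.nn as nn

class PoseDependentWeights(nn.Module):
    def __init__(self, feat_dim=64, n_joints=24, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_joints * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints))

    def forward(self, vertex_feat, pose):
        # vertex_feat: (N, feat_dim); pose: (J, 3) axis-angle per joint
        pose_flat = pose.flatten().expand(vertex_feat.shape[0], -1)
        logits = self.mlp(torch.cat([vertex_feat, pose_flat], dim=1))
        return logits.softmax(dim=-1)  # rows sum to 1, as LBS expects
```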
- Zero-shot Pose Transfer for Unrigged Stylized 3D Characters [87.39039511208092]
We present a zero-shot approach that requires only widely available deformed, non-stylized avatars for training.
We leverage the power of local deformation without requiring explicit correspondence labels.
Our model generalizes to categories with scarce annotation, such as stylized quadrupeds.
arXiv Detail & Related papers (2023-05-31T21:39:02Z)
- Skeleton-free Pose Transfer for Stylized 3D Characters [53.33996932633865]
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that jointly predicts the character's skinning weights and deformation transformations to articulate the target character into the desired pose (see the sketch after this entry).
Our method is trained in a semi-supervised manner, absorbing existing character data with paired and unpaired poses and stylized shapes.
arXiv Detail & Related papers (2022-07-28T20:05:57Z)
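In the skeleton-free setting, learned parts replace bones: a network jointly predicts per-vertex part weights and one rigid transform per part, and articulation is the same weighted-transform blend as LBS. A hypothetical sketch (the architecture, part count, and the 3x4 affine parameterization are all assumptions):

```python
# Hypothetical joint prediction for skeleton-free transfer: one network
# emits per-vertex part weights and a 3x4 affine transform per learned
# part; articulation is the weighted blend sum_k w_k (R_k v + t_k).
import torch
import torch.nn as nn

class SkinningAndTransforms(nn.Module):
    def __init__(self, feat_dim=64, pose_dim=128, n_parts=16, hidden=256):
        super().__init__()
        self.n_parts = n_parts
        self.weight_head = nn.Sequential(
            nn.Linear(feat_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_parts))
        self.transform_head = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_parts * 12))

    def forward(self, vertex_feat, pose_code):
        # vertex_feat: (N, feat_dim) target character; pose_code: (pose_dim,)
        cond = pose_code.expand(vertex_feat.shape[0], -1)
        weights = self.weight_head(
            torch.cat([vertex_feat, cond], dim=1)).softmax(dim=-1)
        transforms = self.transform_head(pose_code).view(self.n_parts, 3, 4)
        return weights, transforms
```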
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach that casts face pose manipulation as a face inpainting task.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the editing result faithfully preserves both the identity and the image style.
With 3D facial landmarks as guidance, our method can manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, enabling more flexible face pose editing (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
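Three degrees of freedom (yaw, pitch, roll) amount to an Euler-angle rotation of the guiding 3D landmarks. A minimal sketch of that guidance step; the Z-X-Y composition order is an assumption, since conventions vary:

```python
# Rotate guiding 3D landmarks by yaw/pitch/roll Euler angles (radians).
# The Z-X-Y composition order is an assumption; conventions vary.
import math
import torch

def rotate_landmarks(landmarks, yaw, pitch, roll):
    # landmarks: (N, 3) float tensor of 3D facial landmarks
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    Rz = torch.tensor([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    return landmarks @ (Rz @ Rx @ Ry).T
```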
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)