ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior
- URL: http://arxiv.org/abs/2503.13579v1
- Date: Mon, 17 Mar 2025 15:59:02 GMT
- Title: ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior
- Authors: Seokhyeon Hong, Soojin Choi, Chaelin Kim, Sihun Cha, Junyong Noh
- Abstract summary: We present a novel method for the automatic rigging and skinning of character meshes using skeletal motion data. The proposed method predicts the optimal skeleton aligned with the size and proportions of the mesh and defines skinning weights for various mesh-skeleton configurations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite the growing accessibility of skeletal motion data, integrating it for animating character meshes remains challenging due to diverse configurations of both skeletons and meshes. Specifically, the body scale and bone lengths of the skeleton should be adjusted in accordance with the size and proportions of the mesh, ensuring that all joints are accurately positioned within the character mesh. Furthermore, defining skinning weights is complicated by variations in skeletal configurations, such as the number of joints and their hierarchy, as well as differences in mesh configurations, including their connectivity and shapes. While existing approaches have made efforts to automate this process, they hardly address the variations in both skeletal and mesh configurations. In this paper, we present a novel method for the automatic rigging and skinning of character meshes using skeletal motion data, accommodating arbitrary configurations of both meshes and skeletons. The proposed method predicts the optimal skeleton aligned with the size and proportion of the mesh as well as defines skinning weights for various mesh-skeleton configurations, without requiring explicit supervision tailored to each of them. By incorporating Diffusion 3D Features (Diff3F) as semantic descriptors of character meshes, our method achieves robust generalization across different configurations. To assess the performance of our method in comparison to existing approaches, we conducted comprehensive evaluations encompassing both quantitative and qualitative analyses, specifically examining the predicted skeletons, skinning weights, and deformation quality.
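The abstract's central objects, skinning weights, drive deformation through standard linear blend skinning: each deformed vertex is a weighted combination of that vertex transformed by every bone. The paper's contribution is predicting these weights and the skeleton; the sketch below only illustrates how predicted weights would be applied. It is a minimal NumPy illustration, not the paper's method, and the function name and array layout are assumptions for clarity.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Deform rest-pose vertices via linear blend skinning.

    vertices:        (V, 3) rest-pose vertex positions
    weights:         (V, J) skinning weights; each row sums to 1
    bone_transforms: (J, 4, 4) homogeneous per-bone transforms
    """
    V = vertices.shape[0]
    # Lift vertices to homogeneous coordinates: (V, 4)
    homo = np.hstack([vertices, np.ones((V, 1))])
    # Apply every bone transform to every vertex: (J, V, 4)
    per_bone = np.einsum('jab,vb->jva', bone_transforms, homo)
    # Blend the per-bone results with the skinning weights: (V, 4)
    blended = np.einsum('vj,jva->va', weights, per_bone)
    return blended[:, :3]
```

Because the weights in each row sum to one, identity bone transforms leave the mesh unchanged; a vertex influenced half by a translated bone moves half the translation distance. This is the deformation model against which the paper's predicted skeletons and weights are ultimately evaluated.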
Related papers
- Mesh Mamba: A Unified State Space Model for Saliency Prediction in Non-Textured and Textured Meshes [50.23625950905638]
Mesh saliency enhances the adaptability of 3D vision by identifying and emphasizing regions that naturally attract visual attention.
We introduce Mesh Mamba, a unified saliency prediction model based on a state space model (SSM).
Mesh Mamba effectively analyzes the geometric structure of the mesh while seamlessly incorporating texture features into the topological framework.
arXiv Detail & Related papers (2025-04-02T08:22:25Z) - ARMO: Autoregressive Rigging for Multi-Category Objects [8.030479370619458]
We introduce OmniRig, the first large-scale rigging dataset, comprising 79,499 meshes with detailed skeleton and skinning information.
Unlike traditional benchmarks that rely on predefined standard poses, our dataset embraces diverse shape categories, styles, and poses.
We propose ARMO, a novel rigging framework that utilizes an autoregressive model to predict both joint positions and connectivity relationships in a unified manner.
arXiv Detail & Related papers (2025-03-26T15:56:48Z) - RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets [47.81216915952291]
We present RigAnything, a novel autoregressive transformer-based model. It makes 3D assets rig-ready by probabilistically generating joints and skeleton topologies and assigning skinning weights in a template-free manner. RigAnything demonstrates state-of-the-art performance across diverse object types, including humanoids, quadrupeds, marine creatures, insects, and many more.
arXiv Detail & Related papers (2025-02-13T18:59:13Z) - Motif Guided Graph Transformer with Combinatorial Skeleton Prototype Learning for Skeleton-Based Person Re-Identification [60.939250172443586]
Person re-identification (re-ID) via 3D skeleton data is a challenging task with significant value in many scenarios. Existing skeleton-based methods typically assume virtual motion relations between all joints and adopt average joint or sequence representations for learning. This paper presents a generic Motif guided graph transformer with Combinatorial skeleton prototype learning (MoCos). MoCos exploits structure-specific and gait-related body relations as well as features of skeleton graphs to learn effective skeleton representations for person re-ID.
arXiv Detail & Related papers (2024-12-12T08:13:29Z) - ToMiE: Towards Modular Growth in Enhanced SMPL Skeleton for 3D Human with Animatable Garments [41.23897822168498]
We propose a modular growth strategy that enables the joint tree of the skeleton to expand adaptively.
Specifically, our method, called ToMiE, consists of parent joints localization and external joints optimization.
ToMiE manages to outperform other methods across various cases with garments, not only in rendering quality but also by offering free animation of grown joints.
arXiv Detail & Related papers (2024-10-10T16:25:52Z) - SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence Pre-training [110.55093254677638]
We propose an efficient skeleton sequence learning framework, named Skeleton Sequence Learning (SSL).
In this paper, we build an asymmetric graph-based encoder-decoder pre-training architecture named SkeletonMAE.
Our SSL generalizes well across different datasets and outperforms the state-of-the-art self-supervised skeleton-based action recognition methods.
arXiv Detail & Related papers (2023-07-17T13:33:11Z) - Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z) - SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces from RGB Images [85.66560542483286]
This paper focuses on the challenging task of learning 3D object surface reconstructions from RGB images.
We propose two models: the Skeleton-Based Graph Convolutional Neural Network (SkeGCNN) and the Skeleton-Regularized Deep Implicit Surface Network (SkeDISN).
We conduct thorough experiments that verify the efficacy of our proposed SkeletonNet.
arXiv Detail & Related papers (2020-08-13T07:59:25Z) - What and Where: Modeling Skeletons from Semantic and Spatial Perspectives for Action Recognition [46.836815779215456]
We propose to model skeletons from a novel spatial perspective, from which the model takes the spatial location as prior knowledge to group human joints.
From the semantic perspective, we propose a Transformer-like network that is expert in modeling joint correlations.
From the spatial perspective, we transform the skeleton data into the sparse format for efficient feature extraction.
arXiv Detail & Related papers (2020-04-07T10:53:45Z)