TapMo: Shape-aware Motion Generation of Skeleton-free Characters
- URL: http://arxiv.org/abs/2310.12678v1
- Date: Thu, 19 Oct 2023 12:14:32 GMT
- Title: TapMo: Shape-aware Motion Generation of Skeleton-free Characters
- Authors: Jiaxu Zhang, Shaoli Huang, Zhigang Tu, Xin Chen, Xiaohang Zhan, Gang
Yu, Ying Shan
- Abstract summary: We present TapMo, a Text-driven Animation Pipeline for synthesizing Motion in a broad spectrum of skeleton-free 3D characters.
TapMo comprises two main components: a Mesh Handle Predictor and a Shape-aware Diffusion Module.
- Score: 64.83230289993145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous motion generation methods are limited to pre-rigged 3D
human models, hindering their application in the animation of various
non-rigged characters. In this work, we present TapMo, a Text-driven Animation
Pipeline for synthesizing Motion in a broad spectrum of skeleton-free 3D
characters. The pivotal innovation in TapMo is its use of shape
deformation-aware features as a condition to guide the diffusion model,
thereby enabling the generation of mesh-specific motions for various
characters. Specifically, TapMo comprises two main components: a Mesh Handle
Predictor and a Shape-aware Diffusion Module. The Mesh Handle Predictor
predicts skinning weights and clusters mesh vertices into adaptive handles for
deformation control, which eliminates the need for traditional skeletal
rigging. The second component, Shape-aware Motion Diffusion, synthesizes
motion with mesh-specific adaptations; it employs text-guided motions and the
mesh features extracted in the first stage, preserving the geometric integrity
of the animations by accounting for the character's shape and deformation.
Trained in a weakly-supervised manner, TapMo can accommodate a multitude of
non-human meshes, both with and without associated text motions. We
demonstrate the effectiveness and generalizability of TapMo through rigorous
qualitative and quantitative experiments. Our results reveal that TapMo
consistently outperforms existing auto-animation methods, delivering
superior-quality animations for both seen and unseen heterogeneous 3D
characters.
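
To make the two-stage design described in the abstract concrete, below is a minimal, hypothetical PyTorch-style sketch of a handle-based pipeline: a predictor that clusters vertices into soft deformation handles and a diffusion module that denoises handle motions conditioned on a text embedding and a global mesh feature. All class and function names (MeshHandlePredictor, ShapeAwareMotionDiffusion, animate), dimensions, and the heavily simplified sampling loop are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of a TapMo-style two-stage pipeline (not the paper's code).
import torch
import torch.nn as nn

class MeshHandlePredictor(nn.Module):
    """Clusters mesh vertices into K deformation handles via soft skinning weights."""
    def __init__(self, num_handles=30, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.weight_head = nn.Linear(feat_dim, num_handles)  # per-vertex handle logits

    def forward(self, verts):                              # verts: (V, 3)
        feat = self.encoder(verts)                         # per-vertex features (V, D)
        skin_w = self.weight_head(feat).softmax(dim=-1)    # soft skinning weights (V, K)
        mesh_feat = feat.mean(dim=0)                       # global shape descriptor (D,)
        return skin_w, mesh_feat

class ShapeAwareMotionDiffusion(nn.Module):
    """Denoises a handle-motion sequence conditioned on text and mesh features."""
    def __init__(self, num_handles=30, feat_dim=128, text_dim=512, steps=50):
        super().__init__()
        self.steps = steps
        d = num_handles * 3
        self.denoiser = nn.Sequential(
            nn.Linear(d + feat_dim + text_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, d))

    def denoise_step(self, x_t, t, text_emb, mesh_feat):
        # x_t: (frames, K*3) noisy handle translations; predict a cleaner sample.
        cond = torch.cat([text_emb, mesh_feat]).expand(x_t.size(0), -1)
        t_emb = torch.full((x_t.size(0), 1), t / self.steps)
        return self.denoiser(torch.cat([x_t, cond, t_emb], dim=-1))

def animate(verts, text_emb, frames=60, num_handles=30):
    predictor = MeshHandlePredictor(num_handles)
    diffusion = ShapeAwareMotionDiffusion(num_handles)
    skin_w, mesh_feat = predictor(verts)                   # stage 1: rig-free handles
    x = torch.randn(frames, num_handles * 3)               # start from Gaussian noise
    for t in reversed(range(diffusion.steps)):             # heavily simplified sampler
        x = diffusion.denoise_step(x, t, text_emb, mesh_feat)
    handle_motion = x.view(frames, num_handles, 3)
    # Linear blend: each vertex follows its weighted combination of handle motions.
    return verts + torch.einsum('vk,fkc->fvc', skin_w, handle_motion)

A caller would supply verts as a (V, 3) vertex tensor and text_emb as a fixed-size text embedding (for example from a CLIP-style encoder); the actual system is trained in a weakly-supervised manner and uses a proper diffusion sampler rather than the naive loop shown here.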
Related papers
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning [92.33690050667475]
AnimateDiff is a framework for animating personalized T2I models without requiring model-specific tuning.
We propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns.
Results show that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity.
arXiv Detail & Related papers (2023-07-10T17:34:16Z)
- FLAME: Free-form Language-based Motion Synthesis & Editing [17.70085940884357]
We propose a diffusion-based motion synthesis and editing model named FLAME.
FLAME can generate high-fidelity motions well aligned with the given text.
It can edit parts of a motion, both frame-wise and joint-wise, without any fine-tuning.
arXiv Detail & Related papers (2022-09-01T10:34:57Z)
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [35.32967411186489]
MotionDiffuse is a diffusion model-based text-driven motion generation framework.
It excels at modeling complicated data distribution and generating vivid motion sequences.
It responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts.
arXiv Detail & Related papers (2022-08-31T17:58:54Z)
- HuMoR: 3D Human Motion Model for Robust Pose Estimation [100.55369985297797]
HuMoR is a 3D Human Motion Model for Robust Estimation of temporal pose and shape.
We introduce a conditional variational autoencoder, which learns a distribution of the change in pose at each step of a motion sequence (a minimal sketch of this idea appears after this list).
We demonstrate that our model generalizes to diverse motions and body shapes after training on a large motion capture dataset.
arXiv Detail & Related papers (2021-05-10T21:04:55Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state of the art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
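
As referenced in the HuMoR entry above, the following is a minimal sketch of the general idea of a conditional variational autoencoder over per-step pose changes. The class name PoseDeltaCVAE, the dimensions, the network sizes, and the loss weighting are illustrative assumptions; this is not HuMoR's actual architecture or training setup.

# Hypothetical CVAE over per-step pose changes (illustrative only).
import torch
import torch.nn as nn

class PoseDeltaCVAE(nn.Module):
    """Conditional VAE modelling the distribution of pose change between frames."""
    def __init__(self, pose_dim=69, latent_dim=32, hidden=256):
        super().__init__()
        # Encoder q(z | pose_prev, pose_curr) over an observed transition.
        self.encoder = nn.Sequential(
            nn.Linear(2 * pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim))             # outputs mean and log-variance
        # Decoder p(delta | z, pose_prev) predicts the change in pose.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim))

    def forward(self, pose_prev, pose_curr):
        stats = self.encoder(torch.cat([pose_prev, pose_curr], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        delta = self.decoder(torch.cat([z, pose_prev], dim=-1))
        return delta, mu, logvar

def cvae_loss(delta_pred, pose_prev, pose_curr, mu, logvar, beta=1e-3):
    # Reconstruct the next pose from the predicted change, plus a KL regularizer.
    recon = (pose_prev + delta_pred - pose_curr).pow(2).mean()
    kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).mean()
    return recon + beta * kl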