RSMT: Real-time Stylized Motion Transition for Characters
- URL: http://arxiv.org/abs/2306.11970v1
- Date: Wed, 21 Jun 2023 01:50:04 GMT
- Title: RSMT: Real-time Stylized Motion Transition for Characters
- Authors: Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao,
Songnan Li, Qilong Kou, Xiaogang Jin
- Abstract summary: We propose a Real-time Stylized Motion Transition method (RSMT) to achieve all aforementioned goals.
Our method consists of two critical, independent components: a general motion manifold model and a style motion sampler.
Our method proves to be fast, high-quality, versatile, and controllable.
- Score: 15.856276818061891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Styled online in-between motion generation has important application
scenarios in computer animation and games. Its core challenge lies in the need
to satisfy four critical requirements simultaneously: generation speed, motion
quality, style diversity, and synthesis controllability. While the first two
challenges demand a delicate balance between simple fast models and learning
capacity for generation quality, the latter two are rarely investigated
together in existing methods, which largely focus on either control without
style or uncontrolled stylized motions. To this end, we propose a Real-time
Stylized Motion Transition method (RSMT) to achieve all aforementioned goals.
Our method consists of two critical, independent components: a general motion
manifold model and a style motion sampler. The former acts as a high-quality
motion source and the latter synthesizes styled motions on the fly under
control signals. Since both components can be trained separately on different
datasets, our method provides great flexibility, requires less data, and
generalizes well when no/few samples are available for unseen styles. Through
exhaustive evaluation, our method proves to be fast, high-quality, versatile,
and controllable. The code and data are available at
https://github.com/yuyujunjun/RSMT-Realtime-Stylized-Motion-Transition.
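As a rough illustration of the two-component design described in the abstract, the following PyTorch-style sketch wires a pretrained motion manifold model (the high-quality motion source) to an independent style sampler that produces styled transitions on the fly from control signals and a style code. All names, dimensions, and interfaces here (MotionManifold, StyleSampler, generate_transition, pose_dim, etc.) are illustrative assumptions, not the actual RSMT implementation; see the linked repository for the real code.

# Hypothetical sketch of the two-component RSMT-style pipeline (not the official API).
import torch
import torch.nn as nn


class MotionManifold(nn.Module):
    """General motion prior: maps poses to a latent manifold and back.
    Trained separately on a large, unlabeled motion dataset."""

    def __init__(self, pose_dim: int = 138, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 256), nn.ELU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ELU(), nn.Linear(256, pose_dim))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)


class StyleSampler(nn.Module):
    """Predicts the next motion latent conditioned on the current latent,
    a control signal (e.g. target frame / trajectory), and a style code.
    Can be trained on a smaller styled dataset, independently of the manifold."""

    def __init__(self, latent_dim: int = 64, control_dim: int = 32, style_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + control_dim + style_dim, 256), nn.ELU(), nn.Linear(256, latent_dim)
        )

    def forward(self, z, control, style):
        return self.net(torch.cat([z, control, style], dim=-1))


@torch.no_grad()
def generate_transition(manifold, sampler, z0, controls, style):
    """Autoregressively roll out styled in-between frames under control signals."""
    z, frames = z0, []
    for control in controls:              # one control signal per generated frame
        z = sampler(z, control, style)    # styled step in latent space
        frames.append(manifold.decode(z)) # decode latent back to a pose
    return torch.stack(frames, dim=1)     # (batch, num_frames, pose_dim)

Because the manifold and the sampler are decoupled in this sketch, the sampler could be retrained or fine-tuned on a small styled dataset without touching the motion prior, which mirrors the property the abstract credits for data efficiency and generalization to unseen styles.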
Related papers
- Taming Diffusion Probabilistic Models for Character Control [46.52584236101806]
We present a novel character control framework that responds in real-time to a variety of user-supplied control signals.
At the heart of our method lies a transformer-based Conditional Autoregressive Motion Diffusion Model.
Our work represents the first model that enables real-time generation of high-quality, diverse character animations.
arXiv Detail & Related papers (2024-04-23T15:20:17Z)
- MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation [19.999239668765885]
MotionMix is a weakly-supervised diffusion model that leverages both noisy and unannotated motion sequences.
Our framework consistently achieves state-of-the-art performances on text-to-motion, action-to-motion, and music-to-dance tasks.
arXiv Detail & Related papers (2024-01-20T04:58:06Z) - MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z) - TapMo: Shape-aware Motion Generation of Skeleton-free Characters [64.83230289993145]
We present TapMo, a Text-driven Animation Pipeline for generating Motion in a broad spectrum of skeleton-free 3D characters.
TapMo comprises two main components - Mesh Handle Predictor and Shape-aware Diffusion Module.
arXiv Detail & Related papers (2023-10-19T12:14:32Z) - MoStGAN-V: Video Generation with Temporal Motion Styles [28.082294960744726]
Previous works attempt to generate videos of arbitrary length either in an autoregressive manner or by regarding time as a continuous signal.
We argue that a single time-agnostic latent vector of a style-based generator is insufficient to model diverse and temporally consistent motions.
We introduce additional time-dependent motion styles to model diverse motion patterns.
arXiv Detail & Related papers (2023-04-05T22:47:12Z) - MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [35.32967411186489]
MotionDiffuse is a diffusion model-based text-driven motion generation framework.
It excels at modeling complicated data distribution and generating vivid motion sequences.
It responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts.
arXiv Detail & Related papers (2022-08-31T17:58:54Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - Real-time Controllable Motion Transition for Characters [14.88407656218885]
Real-time in-between motion generation is universally required in games and highly desirable in existing animation pipelines.
Our approach consists of two key components: motion manifold and conditional transitioning.
We show that our method is able to generate high-quality motions measured under multiple metrics.
arXiv Detail & Related papers (2022-05-05T10:02:54Z) - AMP: Adversarial Motion Priors for Stylized Physics-Based Character
Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z) - UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z) - Hierarchical Style-based Networks for Motion Synthesis [150.226137503563]
We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our proposed method learns to model human motion by decomposing the long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesize long-range, diverse, and plausible motion.
arXiv Detail & Related papers (2020-08-24T02:11:02Z)