Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases
- URL: http://arxiv.org/abs/2201.04439v1
- Date: Wed, 12 Jan 2022 12:25:57 GMT
- Title: Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases
- Authors: Ian Mason, Sebastian Starke, Taku Komura
- Abstract summary: We present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases.
An additional style modulation network uses feature-wise transformations to modulate style in real-time.
In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality.
- Score: 13.034241298005044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Controlling the manner in which a character moves in a real-time animation
system is a challenging task with useful applications. Existing style transfer
systems require access to a reference content motion clip; however, in real-time systems the future motion content is unknown and liable to change
with user input. In this work we present a style modelling system that uses an
animation synthesis network to model motion content based on local motion
phases. An additional style modulation network uses feature-wise
transformations to modulate style in real-time. To evaluate our method, we
create and release a new style modelling dataset, 100STYLE, containing over 4
million frames of stylised locomotion data in 100 different styles that present
a number of challenges for existing systems. To model these styles, we extend
the local phase calculation with a contact-free formulation. In comparison to
other methods for real-time style modelling, we show our system is more robust
and efficient in its style representation while improving motion quality.
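The core mechanism named in the abstract, feature-wise transformation of network activations conditioned on a style, can be pictured as a FiLM-style affine modulation of the synthesis network's hidden features. The snippet below is a minimal, hypothetical sketch in PyTorch: the class name, layer sizes, and the residual (1 + scale) form are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of feature-wise style modulation (FiLM-style).
# Names, dimensions, and the use of PyTorch are assumptions, not the paper's code.
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Maps a style embedding to per-channel scale/shift applied to hidden features."""
    def __init__(self, style_dim: int, feature_dim: int):
        super().__init__()
        self.to_scale_shift = nn.Linear(style_dim, 2 * feature_dim)

    def forward(self, features: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # features: (batch, feature_dim) hidden activations of the synthesis network
        # style:    (batch, style_dim) embedding of the target style
        scale, shift = self.to_scale_shift(style).chunk(2, dim=-1)
        return features * (1.0 + scale) + shift  # feature-wise affine transformation

if __name__ == "__main__":
    mod = StyleModulation(style_dim=64, feature_dim=256)
    hidden = torch.randn(1, 256)  # hidden features for one frame
    style = torch.randn(1, 64)    # style code, e.g. a learned per-style embedding
    print(mod(hidden, style).shape)  # torch.Size([1, 256])
```

Because the scale and shift depend only on the style code, switching styles at runtime amounts to swapping a small set of affine parameters, which is what makes this kind of modulation cheap enough for real-time use.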
Related papers
- SMooDi: Stylized Motion Diffusion Model [46.293854851116215]
We introduce a novel Stylized Motion Diffusion model, dubbed SMooDi, to generate stylized motion driven by content texts and style sequences.
Our proposed framework outperforms existing methods in stylized motion generation.
arXiv Detail & Related papers (2024-07-17T17:59:42Z)
- Taming Diffusion Probabilistic Models for Character Control [46.52584236101806]
We present a novel character control framework that responds in real-time to a variety of user-supplied control signals.
At the heart of our method lies a transformer-based Conditional Autoregressive Motion Diffusion Model.
Our work represents the first model that enables real-time generation of high-quality, diverse character animations.
arXiv Detail & Related papers (2024-04-23T15:20:17Z)
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- Customizing Motion in Text-to-Video Diffusion Models [79.4121510826141]
We introduce an approach for augmenting text-to-video generation models with customized motions.
By leveraging a few video samples demonstrating specific movements as input, our method learns and generalizes the input motion patterns for diverse, text-specified scenarios.
arXiv Detail & Related papers (2023-12-07T18:59:03Z)
- AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning [92.33690050667475]
AnimateDiff is a framework for animating personalized T2I models without requiring model-specific tuning.
We propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns.
Results show that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity.
arXiv Detail & Related papers (2023-07-10T17:34:16Z)
- RSMT: Real-time Stylized Motion Transition for Characters [15.856276818061891]
We propose a Real-time Stylized Motion Transition method (RSMT) to achieve all aforementioned goals.
Our method consists of two critical, independent components: a general motion manifold model and a style motion sampler.
Our method proves to be fast, high-quality, versatile, and controllable.
arXiv Detail & Related papers (2023-06-21T01:50:04Z)
- Online Motion Style Transfer for Interactive Character Control [5.6151459129070505]
We propose an end-to-end neural network that can generate motions with different styles and transfer motion styles in real-time under user control.
Our approach eliminates the use of handcrafted phase features, and can be easily trained and directly deployed in game systems.
arXiv Detail & Related papers (2022-03-30T15:23:37Z)
- Style-ERD: Responsive and Coherent Online Motion Style Transfer [13.15016322155052]
Style transfer is a common method for enriching character animation.
We propose a novel style transfer model, Style-ERD, to stylize motions in an online manner.
Our method stylizes motions into multiple target styles with a unified model.
arXiv Detail & Related papers (2022-03-04T21:12:09Z)
- Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis [56.550999933048075]
We propose a video based synthesis method that tackles challenges and demonstrates high quality results for in-the-wild videos.
We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes.
We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-11-10T20:18:57Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)