Online Motion Style Transfer for Interactive Character Control
- URL: http://arxiv.org/abs/2203.16393v1
- Date: Wed, 30 Mar 2022 15:23:37 GMT
- Title: Online Motion Style Transfer for Interactive Character Control
- Authors: Yingtian Tang, Jiangtao Liu, Cheng Zhou, Tingguang Li
- Abstract summary: We propose an end-to-end neural network that can generate motions with different styles and transfer motion styles in real-time under user control.
Our approach eliminates the use of handcrafted phase features and can be easily trained and directly deployed in game systems.
- Score: 5.6151459129070505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion style transfer is highly desired for motion generation systems for
gaming. Compared to its offline counterpart, the research on online motion
style transfer under interactive control is limited. In this work, we propose
an end-to-end neural network that can generate motions with different styles
and transfer motion styles in real-time under user control. Our approach
eliminates the use of handcrafted phase features and can be easily trained
and directly deployed in game systems. In the experiments, we evaluate our
approach on three aspects that are essential for industrial game design:
accuracy, flexibility, and variety, and our model achieves satisfying results on all three.
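As a rough illustration of the kind of model the abstract describes, the sketch below shows a style-conditioned, phase-free, autoregressive pose predictor driven by user control input. All layer sizes, feature layouts, and the style-embedding choice are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a style-conditioned, phase-free motion generator.
# Layer sizes, feature layouts, and the style embedding are illustrative
# assumptions, not the architecture described in the paper.
import torch
import torch.nn as nn

class OnlineStyleMotionNet(nn.Module):
    def __init__(self, pose_dim=69, control_dim=12, num_styles=8, hidden=512):
        super().__init__()
        self.style_embed = nn.Embedding(num_styles, 64)   # learned style code
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim + control_dim + 64, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
        )
        self.decoder = nn.Linear(hidden, pose_dim)         # next-frame pose

    def forward(self, prev_pose, control, style_id):
        # prev_pose: (B, pose_dim)   current character pose
        # control:   (B, control_dim) user input, e.g. target velocity/direction
        # style_id:  (B,)            integer style label chosen by the player
        s = self.style_embed(style_id)
        h = self.encoder(torch.cat([prev_pose, control, s], dim=-1))
        return prev_pose + self.decoder(h)                 # residual pose update

# One autoregressive step per game frame: feed the predicted pose back in,
# and switch style_id at runtime to transfer style interactively.
net = OnlineStyleMotionNet()
pose = torch.zeros(1, 69)
next_pose = net(pose, torch.zeros(1, 12), torch.tensor([3]))
```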
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the function generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation and motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- Decoupling Contact for Fine-Grained Motion Style Transfer [21.61658765014968]
Motion style transfer changes the style of a motion while retaining its content and is useful in computer animations and games.
However, it remains unknown how to decouple and control contact to achieve fine-grained control in motion style transfer.
We present a novel style transfer method for fine-grained control over contacts while achieving both motion naturalness and spatial-temporal variations of style.
arXiv Detail & Related papers (2024-09-09T07:33:14Z)
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- CALM: Conditional Adversarial Latent Models for Directable Virtual Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z)
- Style-ERD: Responsive and Coherent Online Motion Style Transfer [13.15016322155052]
Style transfer is a common method for enriching character animation.
We propose a novel style transfer model, Style-ERD, to stylize motions in an online manner.
Our method stylizes motions into multiple target styles with a unified model.
arXiv Detail & Related papers (2022-03-04T21:12:09Z) - Motion Puzzle: Arbitrary Motion Style Transfer by Body Part [6.206196935093063]
Motion Puzzle is a novel motion style transfer network that advances the state-of-the-art in several important respects.
Our framework extracts style features from multiple style motions for different body parts and transfers them locally to the target body parts.
It can capture styles exhibited by dynamic movements, such as flapping and staggering, significantly better than previous work.
arXiv Detail & Related papers (2022-02-10T19:56:46Z) - Real-Time Style Modelling of Human Locomotion via Feature-Wise
Transformations and Local Motion Phases [13.034241298005044]
We present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases.
An additional style modulation network uses feature-wise transformations to modulate style in real-time.
In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality (see the feature-wise modulation sketch after this list).
arXiv Detail & Related papers (2022-01-12T12:25:57Z) - AMP: Adversarial Motion Priors for Stylized Physics-Based Character
Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques (see the combined-reward sketch after this list).
arXiv Detail & Related papers (2021-04-05T22:43:14Z) - UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
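The feature-wise transformations used for real-time style modulation in the Real-Time Style Modelling entry above are commonly realized as a per-channel affine transform whose scale and shift are predicted from a style code. The sketch below illustrates that general mechanism under assumed module names and dimensions; it is not the authors' implementation.

```python
# Illustrative feature-wise (FiLM-style) modulation for real-time style control.
# Module names and sizes are hypothetical; this is not the paper's code.
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Predicts per-channel scale (gamma) and shift (beta) from a style code."""
    def __init__(self, style_dim=64, feat_dim=256):
        super().__init__()
        self.to_gamma_beta = nn.Linear(style_dim, 2 * feat_dim)

    def forward(self, features, style_code):
        # features:   (B, feat_dim)  hidden activations of the synthesis network
        # style_code: (B, style_dim) embedding of the desired style
        gamma, beta = self.to_gamma_beta(style_code).chunk(2, dim=-1)
        return (1 + gamma) * features + beta   # feature-wise affine transform

mod = StyleModulation()
styled = mod(torch.randn(2, 256), torch.randn(2, 64))
```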
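The AMP entry specifies low-level style through unstructured motion clips, which in adversarial-motion-prior setups typically amounts to adding a discriminator-derived style reward to the task reward at every control step. The following sketch illustrates that combination with an assumed weighting and a hypothetical discriminator interface; it is not the paper's code.

```python
# Illustrative combination of a task reward with a discriminator-based style
# reward, in the spirit of adversarial motion priors. The weights and the
# discriminator interface are assumptions, not the paper's implementation.
import torch

def style_reward(discriminator, state_transition):
    # Reward is high when the transition looks like the reference motion
    # data to the discriminator (least-squares-GAN-style scoring).
    d = discriminator(state_transition)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def combined_reward(task_r, disc, transition, w_task=0.5, w_style=0.5):
    # Per-step reward fed to the reinforcement-learning policy update.
    return w_task * task_r + w_style * style_reward(disc, transition)
```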
This list is automatically generated from the titles and abstracts of the papers in this site.