Style-ERD: Responsive and Coherent Online Motion Style Transfer
- URL: http://arxiv.org/abs/2203.02574v1
- Date: Fri, 4 Mar 2022 21:12:09 GMT
- Title: Style-ERD: Responsive and Coherent Online Motion Style Transfer
- Authors: Tianxin Tao, Xiaohang Zhan, Zhongquan Chen, Michiel van de Panne
- Abstract summary: Style transfer is a common method for enriching character animation.
We propose a novel style transfer model, Style-ERD, to stylize motions in an online manner.
Our method stylizes motions into multiple target styles with a unified model.
- Score: 13.15016322155052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motion style transfer is a common method for enriching character animation.
Motion style transfer algorithms are often designed for offline settings where
motions are processed in segments. However, for online animation applications,
such as realtime avatar animation from motion capture, motions need to be
processed as a stream with minimal latency. In this work, we realize a
flexible, high-quality motion style transfer method for this setting. We
propose a novel style transfer model, Style-ERD, to stylize motions in an
online manner with an Encoder-Recurrent-Decoder structure, along with a novel
discriminator that combines feature attention and temporal attention. Our
method stylizes motions into multiple target styles with a unified model.
Although our method targets online settings, it outperforms previous offline
methods in motion realism and style expressiveness and provides significant
gains in runtime efficiency.
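To make the Encoder-Recurrent-Decoder idea concrete, the sketch below processes a motion stream one frame at a time, carrying a recurrent hidden state between frames so stylization can happen online. It is a minimal illustration under assumed layer sizes, a one-hot style code, and a PyTorch implementation; the paper's actual configuration and the attention-based discriminator used for adversarial training are not reproduced here.

```python
# Minimal online Encoder-Recurrent-Decoder (ERD) sketch. All dimensions and the
# style-conditioning scheme are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class StyleERDSketch(nn.Module):
    def __init__(self, pose_dim=69, style_dim=8, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim + style_dim, hidden), nn.ReLU())
        self.recurrent = nn.GRUCell(hidden, hidden)  # hidden state carries temporal context
        self.decoder = nn.Linear(hidden, pose_dim)   # maps back to a stylized pose

    def forward(self, pose_t, style_code, h_prev):
        # pose_t: (B, pose_dim) current frame; style_code: (B, style_dim) target-style embedding
        z = self.encoder(torch.cat([pose_t, style_code], dim=-1))
        h = self.recurrent(z, h_prev)
        return self.decoder(h), h

# Streaming usage: stylize frames as they arrive, reusing the hidden state.
model = StyleERDSketch()
h = torch.zeros(1, 256)
style = torch.zeros(1, 8); style[0, 2] = 1.0      # select one of several target styles
for pose in torch.randn(30, 1, 69):               # stand-in for a motion-capture stream
    stylized_pose, h = model(pose, style, h)
```

In this sketch a single unified network covers multiple target styles, switched only by the style code, which mirrors the abstract's claim; how the real system encodes styles is an assumption here.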
Related papers
- Decoupling Contact for Fine-Grained Motion Style Transfer [21.61658765014968]
Motion style transfer changes the style of a motion while retaining its content and is useful in computer animations and games.
However, it remains unclear how to decouple and control contact to achieve fine-grained motion style transfer.
We present a novel style transfer method for fine-grained control over contacts while achieving both motion naturalness and spatial-temporal variations of style.
arXiv Detail & Related papers (2024-09-09T07:33:14Z)
- SMooDi: Stylized Motion Diffusion Model [46.293854851116215]
We introduce a novel Stylized Motion Diffusion model, dubbed SMooDi, to generate stylized motion driven by content texts and style sequences.
Our proposed framework outperforms existing methods in stylized motion generation.
arXiv Detail & Related papers (2024-07-17T17:59:42Z)
- MoST: Motion Style Transformer between Diverse Action Contents [23.62426940733713]
We propose a novel motion style transformer that effectively disentangles style from content and generates a plausible motion with transferred style from a source motion.
Our method outperforms existing methods and demonstrates exceptionally high quality, particularly in motion pairs with different contents, without the need for post-processing.
arXiv Detail & Related papers (2024-03-10T14:11:25Z)
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models [58.93124686141781]
Video Motion Customization (VMC) is a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models.
Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference (a short illustrative sketch follows this list).
We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts.
arXiv Detail & Related papers (2023-12-01T06:50:11Z)
- Online Motion Style Transfer for Interactive Character Control [5.6151459129070505]
We propose an end-to-end neural network that can generate motions with different styles and transfer motion styles in real-time under user control.
Our approach eliminates the use of handcrafted phase features, and can be easily trained and directly deployed in game systems.
arXiv Detail & Related papers (2022-03-30T15:23:37Z)
- Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases [13.034241298005044]
We present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases.
An additional style modulation network uses feature-wise transformations to modulate style in real-time.
In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality.
arXiv Detail & Related papers (2022-01-12T12:25:57Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
- Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We show a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
- Unpaired Motion Style Transfer from Video to Animation [74.15550388701833]
Transferring the motion style from one animation clip to another, while preserving the motion content of the latter, has been a long-standing problem in character animation.
We present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels.
Our framework is able to extract motion styles directly from videos, bypassing 3D reconstruction, and apply them to the 3D input motion.
arXiv Detail & Related papers (2020-05-12T13:21:27Z)
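As referenced in the VMC entry above, the short sketch below illustrates the general idea of using residual vectors between consecutive frames as a motion reference: frame-to-frame differences are computed for both the prediction and the reference, and their misalignment is penalized. The cosine-similarity penalty and the tensor shapes are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative only: frame-to-frame residuals as a motion reference, compared with
# a cosine-similarity penalty (an assumed stand-in for the paper's distillation loss).
import torch
import torch.nn.functional as F

def frame_residuals(frames):
    # frames: (T, D) per-frame features; residuals encode the motion between frames
    return frames[1:] - frames[:-1]

def motion_alignment_loss(pred_frames, ref_frames):
    pred_res = frame_residuals(pred_frames)
    ref_res = frame_residuals(ref_frames)
    # Penalize residual pairs whose directions disagree.
    return (1.0 - F.cosine_similarity(pred_res, ref_res, dim=-1)).mean()

loss = motion_alignment_loss(torch.randn(16, 128), torch.randn(16, 128))
```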