Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase
- URL: http://arxiv.org/abs/2303.01685v1
- Date: Fri, 3 Mar 2023 02:56:44 GMT
- Title: Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase
- Authors: Lintao Wang, Kun Hu, Lei Bai, Yu Ding, Wanli Ouyang, Zhiyong Wang
- Abstract summary: We propose a task-agnostic deep learning method, namely the Multi-scale Control Signal-aware Transformer (MCS-T).
MCS-T generates motions comparable to those produced by methods that use auxiliary information.
- Score: 72.01862340497314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthesizing controllable motion for a character using deep learning has been
a promising approach due to its potential to learn a compact model without
laborious feature engineering. To produce dynamic motion from weak control
signals such as desired paths, existing methods often require auxiliary
information such as phases to alleviate motion ambiguity, which limits their
generalisation capability. As past poses often contain useful auxiliary hints,
in this paper, we propose a task-agnostic deep learning method, namely
Multi-scale Control Signal-aware Transformer (MCS-T), with an attention based
encoder-decoder architecture to discover the auxiliary information implicitly
for synthesizing controllable motion without explicitly requiring auxiliary
information such as phase. Specifically, an encoder is devised to adaptively
formulate the motion patterns of a character's past poses with multi-scale
skeletons, and a decoder, driven by control signals, further synthesizes and
predicts the character's state by paying context-specialised attention to the
encoded past motion patterns. As a result, the method alleviates the low
responsiveness and slow transitions that often occur in conventional methods
that do not use auxiliary information. Both qualitative and quantitative experimental
results on an existing biped locomotion dataset, which involves diverse types
of motion transitions, demonstrate the effectiveness of our method. In
particular, MCS-T is able to successfully generate motions comparable to those
generated by the methods using auxiliary information.
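The abstract describes an attention-based encoder-decoder: past poses are encoded at multiple skeleton scales, and a decoder attends to that encoding under the guidance of control signals. As a rough illustration only, here is a minimal PyTorch sketch of that pattern; every module name, dimension, and the joint-pooling scheme are assumptions, not the paper's actual MCS-T design.

```python
# A minimal sketch, assuming PyTorch, of the encoder-decoder pattern the
# abstract describes. All names, dimensions, and the joint-pooling scheme
# are illustrative assumptions, not the paper's actual MCS-T architecture.
import torch
import torch.nn as nn


class MultiScaleMotionEncoder(nn.Module):
    """Encodes past poses at several skeleton scales (here, coarser
    skeletons are formed by average-pooling neighbouring joints)."""

    def __init__(self, n_joints=24, joint_dim=12, d_model=256, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.proj = nn.ModuleList(
            [nn.Linear((n_joints // s) * joint_dim, d_model) for s in scales]
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, past_poses):
        # past_poses: (batch, time, n_joints, joint_dim)
        b, t, j, d = past_poses.shape
        tokens = []
        for scale, proj in zip(self.scales, self.proj):
            # Merge groups of `scale` neighbouring joints into one joint.
            pooled = past_poses.reshape(b, t, j // scale, scale, d).mean(dim=3)
            tokens.append(proj(pooled.flatten(2)))  # (b, t, d_model) per scale
        return self.encoder(torch.cat(tokens, dim=1))


class ControlAwareDecoder(nn.Module):
    """Predicts the character's next state by cross-attending from
    control-signal queries (e.g. a desired path) to encoded past motion."""

    def __init__(self, control_dim=4, n_joints=24, joint_dim=12, d_model=256):
        super().__init__()
        self.control_proj = nn.Linear(control_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_joints * joint_dim)

    def forward(self, control, memory):
        # control: (batch, horizon, control_dim); memory: encoder output.
        out = self.decoder(self.control_proj(control), memory)
        return self.head(out)  # flattened predicted pose(s)


if __name__ == "__main__":
    enc, dec = MultiScaleMotionEncoder(), ControlAwareDecoder()
    past = torch.randn(2, 30, 24, 12)  # 30 past frames, 24 joints
    ctrl = torch.randn(2, 1, 4)        # desired path for the next frame
    print(dec(ctrl, enc(past)).shape)  # torch.Size([2, 1, 288])
```

Note that the paper calls its attention over past motion patterns "context-specialised"; a standard transformer decoder merely stands in for that mechanism here.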
Related papers
- TLControl: Trajectory and Language Control for Human Motion Synthesis [68.09806223962323]
We present TLControl, a novel method for realistic human motion synthesis.
It incorporates both low-level Trajectory and high-level Language semantic controls.
It is practical for interactive and high-quality animation generation.
arXiv Detail & Related papers (2023-11-28T18:54:16Z) - MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete
Representations [25.630268570049708]
MoConVQ is a novel unified framework for physics-based motion control leveraging scalable discrete representations.
Our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples.
arXiv Detail & Related papers (2023-10-16T09:09:02Z) - CALM: Conditional Adversarial Latent Models for Directable Virtual
Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z) - Controllable Motion Synthesis and Reconstruction with Autoregressive
Diffusion Models [18.50942770933098]
MoDiff is an autoregressive probabilistic diffusion model over motion sequences conditioned on control contexts of other modalities.
Our model integrates a cross-modal Transformer encoder and a Transformer-based decoder, which are found effective in capturing temporal correlations across the motion and control modalities; a hedged sketch of this kind of conditional diffusion sampling appears after this list.
arXiv Detail & Related papers (2023-04-03T08:17:08Z) - Transition Motion Tensor: A Data-Driven Approach for Versatile and
Controllable Agents in Physically Simulated Environments [6.8438089867929905]
This paper proposes a data-driven framework that creates novel and physically accurate transitions outside of the motion dataset.
It enables simulated characters to adopt new motion skills efficiently and robustly without modifying existing ones.
arXiv Detail & Related papers (2021-11-30T02:17:25Z) - Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z) - CCVS: Context-aware Controllable Video Synthesis [95.22008742695772]
This work introduces a self-supervised learning approach to the synthesis of new video clips from old ones.
It conditions the synthesis process on contextual information for temporal continuity and ancillary information for fine control.
arXiv Detail & Related papers (2021-07-16T17:57:44Z) - AMP: Adversarial Motion Priors for Stylized Physics-Based Character
Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z) - ChallenCap: Monocular 3D Capture of Challenging Human Performances using
Multi-Modal References [18.327101908143113]
We propose ChallenCap -- a template-based approach to capture challenging 3D human motions using a single RGB camera.
We adopt a novel learning-and-optimization framework, with the aid of multi-modal references.
Experiments on our new challenging motion dataset demonstrate the effectiveness and robustness of our approach to capture challenging human motions.
arXiv Detail & Related papers (2021-03-11T15:49:22Z) - Recognition and Synthesis of Object Transport Motion [0.0]
This project illustrates how deep convolutional networks can be used, alongside specialized data augmentation techniques, on a small motion capture dataset.
The project shows how these same augmentation techniques can be scaled up for use in the more complex task of motion synthesis.
By exploring recent developments in Generative Adversarial Networks (GANs), specifically the Wasserstein GAN, this project outlines a model that generates lifelike object transportation motions.
arXiv Detail & Related papers (2020-09-27T22:13:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.