Real-time Controllable Motion Transition for Characters
- URL: http://arxiv.org/abs/2205.02540v1
- Date: Thu, 5 May 2022 10:02:54 GMT
- Title: Real-time Controllable Motion Transition for Characters
- Authors: Xiangjun Tang, He Wang, Bo Hu, Xu Gong, Ruifan Yi, Qilong Kou,
Xiaogang Jin
- Abstract summary: Real-time in-between motion generation is universally required in games and highly desirable in existing animation pipelines.
Our approach consists of two key components: motion manifold and conditional transitioning.
We show that our method is able to generate high-quality motions measured under multiple metrics.
- Score: 14.88407656218885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time in-between motion generation is universally required in games and
highly desirable in existing animation pipelines. Its core challenge lies in
the need to satisfy three critical conditions simultaneously: quality,
controllability and speed, which renders undesirable any method that requires
offline computation (or post-processing) or cannot incorporate (often
unpredictable) user control. To this end, we propose a new real-time transition
method to address the aforementioned challenges. Our approach consists of two
key components: motion manifold and conditional transitioning. The former
learns the important low-level motion features and their dynamics; while the
latter synthesizes transitions conditioned on a target frame and the desired
transition duration. We first learn a motion manifold that explicitly models
the intrinsic transition stochasticity in human motions via a multi-modal
mapping mechanism. Then, during generation, we design a transition model that
is essentially a sampling strategy: it draws from the learned manifold,
conditioned on the target frame and the desired transition duration. We validate our method
on different datasets in tasks where no post-processing or offline computation
is allowed. Through exhaustive evaluation and comparison, we show that our
method is able to generate high-quality motions measured under multiple
metrics. Our method is also robust across a wide range of target frames
(including extreme cases).
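The abstract above outlines a two-stage design: an offline-learned stochastic motion manifold and an online sampler conditioned on the target frame and the remaining transition duration. Below is a minimal, hypothetical PyTorch sketch of that structure, not the authors' implementation: the CVAE-style latent, all module names, and all dimensions are assumptions made for illustration.

```python
# Hypothetical sketch of the two components the abstract describes; the
# CVAE-style latent, layer sizes, and names are assumptions, not the paper's.
import torch
import torch.nn as nn

POSE_DIM, LATENT_DIM, HIDDEN = 96, 32, 256  # assumed sizes

class MotionManifold(nn.Module):
    """Encodes a pose-transition pair into a stochastic latent (modeling the
    multi-modality of natural motion), then decodes the next pose."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(POSE_DIM * 2, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, LATENT_DIM * 2))           # -> (mu, logvar)
        self.decoder = nn.Sequential(
            nn.Linear(POSE_DIM + LATENT_DIM, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, POSE_DIM))                  # -> next pose

    def forward(self, cur_pose, next_pose):
        mu, logvar = self.encoder(
            torch.cat([cur_pose, next_pose], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(torch.cat([cur_pose, z], -1)), mu, logvar

class TransitionSampler(nn.Module):
    """Predicts a latent for the manifold decoder, conditioned on the current
    pose, the target frame, and the remaining transition duration."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM * 2 + 1, HIDDEN), nn.ELU(),
            nn.Linear(HIDDEN, LATENT_DIM))

    def forward(self, cur_pose, target_pose, frames_left):
        return self.net(torch.cat([cur_pose, target_pose, frames_left], -1))

def generate_transition(manifold, sampler, start, target, duration):
    """Autoregressively roll out `duration` in-between frames."""
    pose, frames = start, []
    for t in range(duration, 0, -1):
        t_left = torch.full((pose.shape[0], 1), float(t))
        z = sampler(pose, target, t_left)                  # sample the manifold
        pose = manifold.decoder(torch.cat([pose, z], -1))  # decode next frame
        frames.append(pose)
    return torch.stack(frames, 1)

if __name__ == "__main__":
    m, s = MotionManifold(), TransitionSampler()
    start, target = torch.randn(4, POSE_DIM), torch.randn(4, POSE_DIM)
    clip = generate_transition(m, s, start, target, duration=30)
    print(clip.shape)  # torch.Size([4, 30, 96])
```

At generation time only the sampler and the manifold decoder run, which is what makes a design of this shape compatible with the paper's real-time, no-post-processing requirement.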
Related papers
- Deciphering Movement: Unified Trajectory Generation Model for Multi-Agent [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs.
Specifically, we introduce a Ghost Spatial Masking (GSM) module embedded within a Transformer encoder for spatial feature extraction.
We benchmark three practical sports game datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
arXiv Detail & Related papers (2024-05-27T22:15:23Z) - HumMUSS: Human Motion Understanding using State Space Models [6.821961232645209]
We propose a novel attention-free model for human motion understanding building upon recent advancements in state space models.
Our model supports both offline and real-time applications.
For real-time sequential prediction, our model is both memory efficient and several times faster than transformer-based approaches.
arXiv Detail & Related papers (2024-04-16T19:59:21Z) - DiffusionPhase: Motion Diffusion in Frequency Domain [69.811762407278]
We introduce a learning-based method for generating high-quality human motion sequences from text descriptions.
Existing techniques struggle with motion diversity and smooth transitions in generating arbitrary-length motion sequences.
We develop a network encoder that converts the motion space into a compact yet expressive parameterized phase space.
arXiv Detail & Related papers (2023-12-07T04:39:22Z) - Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system that reaches target character poses by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
arXiv Detail & Related papers (2023-08-24T12:56:39Z) - Synthesizing Long-Term Human Motions with Diffusion Models via Coherent
Sampling [74.62570964142063]
Text-to-motion generation has gained increasing attention, but most existing methods are limited to generating short-term motions.
We propose a novel approach that utilizes a past-conditioned diffusion model with two optional coherent sampling methods.
Our proposed method is capable of generating compositional and coherent long-term 3D human motions controlled by a user-instructed long text stream.
arXiv Detail & Related papers (2023-08-03T16:18:32Z) - RSMT: Real-time Stylized Motion Transition for Characters [15.856276818061891]
We propose a Real-time Stylized Motion Transition method (RSMT) that achieves all of the aforementioned goals.
Our method consists of two critical, independent components: a general motion manifold model and a style motion sampler.
Our method proves to be fast, high-quality, versatile, and controllable.
arXiv Detail & Related papers (2023-06-21T01:50:04Z) - Weakly-supervised Action Transition Learning for Stochastic Human Motion
Prediction [81.94175022575966]
We introduce the task of action-driven human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
arXiv Detail & Related papers (2022-05-31T08:38:07Z) - Transition Motion Tensor: A Data-Driven Approach for Versatile and
Controllable Agents in Physically Simulated Environments [6.8438089867929905]
This paper proposes a data-driven framework that creates novel and physically accurate transitions outside of the motion dataset.
It enables simulated characters to adopt new motion skills efficiently and robustly without modifying existing ones.
arXiv Detail & Related papers (2021-11-30T02:17:25Z) - Robust Motion In-betweening [17.473287573543065]
We present a novel, robust transition generation technique that can serve as a new tool for 3D animators.
The system synthesizes high-quality motions that use temporally-sparse keyframes as animation constraints.
We present a custom MotionBuilder plugin that uses our trained model to perform in-betweening in production scenarios.
arXiv Detail & Related papers (2021-02-09T16:52:45Z) - Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z) - Continuous Transition: Improving Sample Efficiency for Continuous
Control Problems via MixUp [119.69304125647785]
This paper introduces a concise yet powerful method to construct Continuous Transition.
Specifically, we propose to synthesize new transitions for training by linearly interpolating consecutive transitions (a minimal sketch follows this list).
To keep the constructed transitions authentic, we also develop a discriminator to guide the construction process automatically.
arXiv Detail & Related papers (2020-11-30T01:20:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.