Generative Tweening: Long-term Inbetweening of 3D Human Motions
- URL: http://arxiv.org/abs/2005.08891v2
- Date: Thu, 28 May 2020 05:36:46 GMT
- Title: Generative Tweening: Long-term Inbetweening of 3D Human Motions
- Authors: Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li
- Abstract summary: We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions.
Trained with 79 classes of captured motion data, our network performs robustly on a variety of highly complex motion styles.
- Score: 40.16462039509098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to generate complex and realistic human body animations at scale,
while following specific artistic constraints, has been a fundamental goal for
the game and animation industry for decades. Popular techniques include
key-framing, physics-based simulation, and database methods via motion graphs.
Recently, motion generators based on deep learning have been introduced.
Although these learning models can automatically generate highly intricate
stylized motions of arbitrary length, they still lack user control. To this
end, we introduce the problem of long-term inbetweening, which involves
automatically synthesizing complex motions over a long time interval given very
sparse keyframes by users. We identify a number of challenges related to this
problem, including maintaining biomechanical and keyframe constraints,
preserving natural motions, and designing the entire motion sequence
holistically while considering all constraints. We introduce a biomechanically
constrained generative adversarial network that performs long-term inbetweening
of human motions, conditioned on keyframe constraints. This network uses a
novel two-stage approach where it first predicts local motion in the form of
joint angles, and then predicts global motion, i.e. the global path that the
character follows. Since there are typically a number of possible motions that
could satisfy the given user constraints, we also enable our network to
generate a variety of outputs with a scheme that we call Motion DNA. This
approach allows the user to manipulate and influence the output content by
feeding seed motions (DNA) to the network. Trained with 79 classes of captured
motion data, our network performs robustly on a variety of highly complex
motion styles.
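The two-stage design described in the abstract (first local motion as joint angles, then the global path, with a Motion DNA seed for output diversity) can be illustrated with a short sketch. This is not the authors' released code: the module structure, tensor shapes, and layer sizes below are assumptions chosen only to show how such a keyframe-conditioned, two-stage pipeline might fit together in PyTorch.

```python
# Illustrative sketch only; shapes, layer sizes, and the DNA encoding are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn


class LocalMotionGenerator(nn.Module):
    """Stage 1 (hypothetical): predict per-frame joint rotations from sparse
    keyframes and a Motion-DNA seed vector."""
    def __init__(self, n_joints=24, rot_dim=6, dna_dim=32, hidden=256):
        super().__init__()
        in_dim = n_joints * rot_dim + 1 + dna_dim  # keyframe pose + mask + DNA
        self.rnn = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * rot_dim)

    def forward(self, keyframes, keyframe_mask, dna):
        # keyframes: (B, T, J*R), zero on non-keyframe frames
        # keyframe_mask: (B, T, 1), dna: (B, dna_dim)
        T = keyframes.shape[1]
        dna_seq = dna.unsqueeze(1).expand(-1, T, -1)
        x = torch.cat([keyframes, keyframe_mask, dna_seq], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)                        # (B, T, J*R) local motion


class GlobalPathGenerator(nn.Module):
    """Stage 2 (hypothetical): predict the global root path from local motion."""
    def __init__(self, n_joints=24, rot_dim=6, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_joints * rot_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)           # per-frame root displacement

    def forward(self, local_motion):
        h, _ = self.rnn(local_motion)
        step = self.head(h)
        return torch.cumsum(step, dim=1)           # integrate into a global path


if __name__ == "__main__":
    B, T, J, R = 2, 120, 24, 6
    keyframes = torch.zeros(B, T, J * R)
    mask = torch.zeros(B, T, 1)
    mask[:, ::30] = 1.0                            # a sparse keyframe every 30 frames
    dna = torch.randn(B, 32)                       # seed motion ("DNA") encoding
    local = LocalMotionGenerator()(keyframes, mask, dna)
    path = GlobalPathGenerator()(local)
    print(local.shape, path.shape)                 # (2, 120, 144) (2, 120, 3)
```

In this reading of the abstract, diversity comes from varying the DNA seed while keeping the keyframe constraints fixed; the adversarial and biomechanical losses the paper describes would be applied on top of such a generator and are omitted here.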
Related papers
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system that reaches target character poses by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
arXiv Detail & Related papers (2023-08-24T12:56:39Z) - Task-Oriented Human-Object Interactions Generation with Implicit Neural
Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z) - GANimator: Neural Motion Synthesis from a Single Sequence [38.361579401046875]
We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.
GANimator generates motions that resemble the core elements of the original motion, while simultaneously synthesizing novel and diverse movements.
We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, which all learn from a single input sequence.
arXiv Detail & Related papers (2022-05-05T13:04:14Z) - The Wanderings of Odysseus in 3D Scenes [22.230079422580065]
We propose generative motion primitives via body surface markers, shortened as GAMMA.
We exploit body surface markers and a conditional variational autoencoder to model each motion primitive.
Experiments show that our method can produce more realistic and controllable motion than state-of-the-art data-driven methods.
arXiv Detail & Related papers (2021-12-16T23:24:50Z) - Task-Generic Hierarchical Human Motion Prior using VAEs [44.356707509079044]
A deep generative model that describes human motions can benefit a wide range of fundamental computer vision and graphics tasks.
We present a method for learning complex human motions independent of specific tasks using a combined global and local latent space.
We demonstrate the effectiveness of our hierarchical motion variational autoencoder in a variety of tasks including video-based human pose estimation.
arXiv Detail & Related papers (2021-06-07T23:11:42Z) - High-Fidelity Neural Human Motion Transfer from Monocular Video [71.75576402562247]
Video-based human motion transfer creates video animations of humans following a source motion.
We present a new framework which performs high-fidelity and temporally-consistent human motion transfer with natural pose-dependent non-rigid deformations.
In the experimental results, we significantly outperform the state-of-the-art in terms of video realism.
arXiv Detail & Related papers (2020-12-20T16:54:38Z) - Hierarchical Style-based Networks for Motion Synthesis [150.226137503563]
We propose a self-supervised method for generating long-range, diverse and plausible behaviors to achieve a specific goal location.
Our proposed method learns to model human motion by decomposing a long-range generation task in a hierarchical manner.
On a large-scale skeleton dataset, we show that the proposed method is able to synthesise long-range, diverse and plausible motion.
arXiv Detail & Related papers (2020-08-24T02:11:02Z)