Perpetual Motion: Generating Unbounded Human Motion
- URL: http://arxiv.org/abs/2007.13886v1
- Date: Mon, 27 Jul 2020 21:50:36 GMT
- Title: Perpetual Motion: Generating Unbounded Human Motion
- Authors: Yan Zhang and Michael J. Black and Siyu Tang
- Abstract summary: We focus on long-term prediction; that is, generating long sequences of plausible human motion.
We propose a model to generate non-deterministic, ever-changing, perpetual human motion.
We train it using a heavy-tailed function of the KL divergence against a white-noise Gaussian process, allowing temporal dependency in the latent sequence.
- Score: 61.40259979876424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The modeling of human motion using machine learning methods has been widely
studied. In essence it is a time-series modeling problem involving predicting
how a person will move in the future given how they moved in the past. Existing
methods, however, typically have a short time horizon, predicting only a few
frames to a few seconds of human motion. Here we focus on long-term prediction;
that is, generating long (potentially infinite) sequences of plausible human
motion. Furthermore, we do not rely on a long sequence of input motion
for conditioning, but rather, can predict how someone will move from as little
as a single pose. Such a model has many uses in graphics (video games and crowd
animation) and vision (as a prior for human motion estimation or for dataset
creation). To address this problem, we propose a model to generate
non-deterministic, ever-changing, perpetual human motion, in which the
global trajectory and the body pose are cross-conditioned. We introduce a novel
KL-divergence term with an implicit, unknown, prior. We train this using a
heavy-tailed function of the KL divergence against a white-noise Gaussian
process, allowing temporal dependency in the latent sequence. We perform
systematic experiments
to verify its effectiveness and find that it is superior to baseline methods.
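The abstract's training objective — a heavy-tailed function applied to the KL divergence between the inferred latent sequence and a white-noise Gaussian-process prior — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the heavy-tailed function, so a Charbonnier-style square root is used here purely as a stand-in with the right qualitative shape (sublinear growth, so large per-step KL values are penalized less harshly than under a plain KL term).

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # Per-element KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian.
    # A white-noise Gaussian-process prior treats each latent time step as
    # an independent standard normal, so the per-step KL takes this form.
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

def heavy_tailed_kl(mu, logvar, eps=1e-3):
    # Illustrative heavy-tailed transform (Charbonnier-style square root).
    # The paper's exact function is not given in the abstract; this stand-in
    # only demonstrates the idea: it grows sublinearly in the KL, so the
    # latent sequence can deviate strongly from white noise at some steps
    # (retaining temporal dependency) without dominating the loss.
    kl = kl_to_standard_normal(mu, logvar)
    return np.mean(np.sqrt(kl + eps**2))

# mu, logvar: latent sequence of shape (T timesteps, D latent dims)
mu, logvar = np.zeros((4, 8)), np.zeros((4, 8))
loss = heavy_tailed_kl(mu, logvar)
```

Because the transform is concave, a single large per-step deviation costs far less than its raw KL value, which is what lets the regularizer tolerate temporally correlated latents.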
Related papers
- DMMGAN: Diverse Multi Motion Prediction of 3D Human Joints using
Attention-Based Generative Adverserial Network [9.247294820004143]
We propose a transformer-based generative model for forecasting multiple diverse human motions.
Our model first predicts the pose of the body relative to the hip joint. Then the Hip Prediction Module predicts the trajectory of the hip movement for each predicted pose frame.
We show that our system outperforms the state-of-the-art in human motion prediction while also predicting diverse multi-motion future trajectories with hip movements.
arXiv Detail & Related papers (2022-09-13T23:22:33Z) - 3D Skeleton-based Human Motion Prediction with Manifold-Aware GAN [3.1313293632309827]
We propose a novel solution for 3D skeleton-based human motion prediction.
We build a manifold-aware Wasserstein generative adversarial model that captures the temporal and spatial dependencies of human motion.
Experiments have been conducted on the CMU MoCap and Human3.6M datasets.
arXiv Detail & Related papers (2022-03-01T20:49:13Z) - Investigating Pose Representations and Motion Contexts Modeling for 3D
Motion Prediction [63.62263239934777]
We conduct an in-depth study of various pose representations, focusing on their effects on the motion prediction task.
We propose a novel RNN architecture termed AHMR (Attentive Hierarchical Motion Recurrent network) for motion prediction.
Our approach outperforms the state-of-the-art methods in short-term prediction and achieves substantially improved long-term prediction.
arXiv Detail & Related papers (2021-12-30T10:45:22Z) - Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z) - 3D Human motion anticipation and classification [8.069283749930594]
We propose a novel sequence-to-sequence model for human motion prediction and feature learning.
Our model learns to predict multiple future sequences of human poses from the same input sequence.
We show that, using the features learned by the discriminator, an activity recognition network can be trained in less than half the number of epochs.
arXiv Detail & Related papers (2020-12-31T00:19:39Z) - Long Term Motion Prediction Using Keyposes [122.22758311506588]
We argue that, to achieve long term forecasting, predicting human pose at every time instant is unnecessary.
Instead, we represent motion with a sparse set of characteristic poses, which we call "keyposes", and approximate complex motions by linearly interpolating between subsequent keyposes.
We show that learning the sequence of such keyposes allows us to predict very long term motion, up to 5 seconds in the future.
arXiv Detail & Related papers (2020-12-08T20:45:51Z) - We are More than Our Joints: Predicting how 3D Bodies Move [63.34072043909123]
We train a novel variational autoencoder that generates motions from latent frequencies.
Experiments show that our method produces state-of-the-art results and realistic 3D body animations.
arXiv Detail & Related papers (2020-12-01T16:41:04Z) - Motion Prediction Using Temporal Inception Module [96.76721173517895]
We propose a Temporal Inception Module (TIM) to encode human motion.
Our framework produces input embeddings using convolutional layers, by using different kernel sizes for different input lengths.
Experimental results on the standard motion prediction benchmarks, Human3.6M and the CMU motion capture dataset, show that our approach consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-06T20:26:01Z)
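The multi-kernel embedding idea in the TIM entry above can be sketched briefly. This is only an illustration of the principle (convolutions with several kernel sizes capture short and long temporal contexts side by side); the specific kernel sizes, random filters, and pooling here are assumptions, whereas the paper pairs particular kernel sizes with particular input lengths.

```python
import numpy as np

def multi_kernel_embedding(seq, kernel_sizes=(3, 5, 7), seed=0):
    # Run 1-D convolutions with several kernel sizes over a single joint
    # trajectory and concatenate the results, so short- and long-range
    # temporal contexts are both represented in the embedding.
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        w = rng.standard_normal(k)          # illustrative random filter
        # 'valid' convolution, then mean-pool to one feature per kernel size
        feats.append(np.convolve(seq, w, mode="valid").mean())
    return np.array(feats)

# Example: a length-10 trajectory yields one pooled feature per kernel size.
embedding = multi_kernel_embedding(np.arange(10.0))
```

In a real model, the random filters would be learned weights and the pooled scalars would be feature maps; the point is only that each kernel size contributes its own temporal receptive field.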
This list is automatically generated from the titles and abstracts of the papers on this site.