Learning Snippet-to-Motion Progression for Skeleton-based Human Motion
Prediction
- URL: http://arxiv.org/abs/2307.14006v1
- Date: Wed, 26 Jul 2023 07:36:38 GMT
- Title: Learning Snippet-to-Motion Progression for Skeleton-based Human Motion
Prediction
- Authors: Xinshun Wang, Qiongjie Cui, Chen Chen, Shen Zhao, Mengyuan Liu
- Abstract summary: Existing Graph Convolutional Networks for human motion prediction largely adopt a one-step scheme.
We observe that human motions have transitional patterns and can be split into snippets representative of each transition.
We propose a snippet-to-motion multi-stage framework that breaks motion prediction into sub-tasks easier to accomplish.
- Score: 14.988322340164391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Graph Convolutional Networks for human motion prediction
largely adopt a one-step scheme, which outputs the prediction directly from
the history input and fails to exploit human motion patterns. We observe that human
motions have transitional patterns and can be split into snippets
representative of each transition. Each snippet can be reconstructed from its
starting and ending poses referred to as the transitional poses. We propose a
snippet-to-motion multi-stage framework that breaks motion prediction into
sub-tasks easier to accomplish. Each sub-task integrates three modules:
transitional pose prediction, snippet reconstruction, and snippet-to-motion
prediction. Specifically, we propose to first predict only the transitional
poses. Then we use them to reconstruct the corresponding snippets, obtaining a
close approximation to the true motion sequence. Finally, we refine them to
produce the final prediction output. To implement the network, we propose a
novel unified graph modeling, which allows for direct and effective feature
propagation compared to existing approaches which rely on separate space-time
modeling. Extensive experiments on Human3.6M, CMU Mocap and 3DPW datasets
verify the effectiveness of our method which achieves state-of-the-art
performance.
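The three-stage pipeline described in the abstract (predict transitional poses, reconstruct snippets between them, then refine into the final motion) can be sketched as follows. This is a minimal illustration only: the function names are hypothetical, and the velocity-extrapolation and linear-interpolation steps are simple stand-ins for the paper's learned graph-convolutional modules.

```python
import numpy as np

def predict_transitional_poses(history, num_transitions):
    # Stage 1 (stand-in): extrapolate the last observed velocity to guess
    # the transitional poses; the paper uses a learned predictor instead.
    velocity = history[-1] - history[-2]
    return np.stack([history[-1] + (i + 1) * velocity
                     for i in range(num_transitions)])

def reconstruct_snippet(start_pose, end_pose, snippet_len):
    # Stage 2 (stand-in): linearly interpolate between the starting and
    # ending (transitional) poses to reconstruct one snippet.
    ts = np.linspace(0.0, 1.0, snippet_len)
    return np.stack([(1 - t) * start_pose + t * end_pose for t in ts])

def snippet_to_motion(history, num_transitions=3, snippet_len=5):
    transitional = predict_transitional_poses(history, num_transitions)
    # Anchor each snippet on consecutive transitional poses, starting
    # from the last observed frame.
    anchors = np.concatenate([history[-1:], transitional])
    snippets = [reconstruct_snippet(anchors[i], anchors[i + 1], snippet_len)[1:]
                for i in range(num_transitions)]
    coarse = np.concatenate(snippets)
    # Stage 3: a learned refinement network would polish the coarse
    # approximation; here it is left as the identity.
    refined = coarse
    return refined

history = np.random.randn(10, 22, 3)  # 10 observed frames, 22 joints, 3D coords
pred = snippet_to_motion(history)
print(pred.shape)  # (12, 22, 3): 3 snippets x 4 new frames each
```

The key design idea is that each sub-task (anchor prediction, snippet reconstruction, refinement) is easier than predicting the full future sequence in one step.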
Related papers
- Past Movements-Guided Motion Representation Learning for Human Motion Prediction [0.0]
We propose a self-supervised learning framework designed to enhance motion representation.
The framework consists of two stages: first, the network is pretrained through the self-reconstruction of past sequences, and the guided reconstruction of future sequences based on past movements.
Our method reduces the average prediction errors by 8.8% across Human3.6, 3DPW, and AMASS datasets.
arXiv Detail & Related papers (2024-08-04T17:00:37Z)
- Humanoid Locomotion as Next Token Prediction [84.21335675130021]
Our model is a causal transformer trained via autoregressive prediction of sensorimotor trajectories.
We show that our model enables a full-sized humanoid to walk in San Francisco zero-shot.
Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize commands not seen during training like walking backward.
arXiv Detail & Related papers (2024-02-29T18:57:37Z)
- DMMGAN: Diverse Multi Motion Prediction of 3D Human Joints using Attention-Based Generative Adversarial Network [9.247294820004143]
We propose a transformer-based generative model for forecasting multiple diverse human motions.
Our model first predicts the pose of the body relative to the hip joint. Then the Hip Prediction Module predicts the trajectory of the hip movement for each predicted pose frame.
We show that our system outperforms the state-of-the-art in human motion prediction while it can predict diverse multi-motion future trajectories with hip movements.
arXiv Detail & Related papers (2022-09-13T23:22:33Z)
- Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction [81.94175022575966]
We introduce the task of action-driven human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
arXiv Detail & Related papers (2022-05-31T08:38:07Z)
- Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction [63.62263239934777]
We conduct an in-depth study on various pose representations with a focus on their effects on the motion prediction task.
We propose a novel RNN architecture termed AHMR (Attentive Hierarchical Motion Recurrent network) for motion prediction.
Our approach outperforms the state-of-the-art methods in short-term prediction and achieves much enhanced long-term prediction proficiency.
arXiv Detail & Related papers (2021-12-30T10:45:22Z)
- Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z)
- Multi-level Motion Attention for Human Motion Prediction [132.29963836262394]
We study the use of different types of attention, computed at joint, body part, and full pose levels.
Our experiments on Human3.6M, AMASS and 3DPW validate the benefits of our approach for both periodical and non-periodical actions.
arXiv Detail & Related papers (2021-06-17T08:08:11Z)
- TrajeVAE -- Controllable Human Motion Generation from Trajectories [20.531400859656042]
We propose a novel transformer-like architecture, TrajeVAE, that provides a versatile framework for 3D human animation.
We show that TrajeVAE outperforms trajectory-based reference approaches and methods that base their predictions on past poses in terms of accuracy.
arXiv Detail & Related papers (2021-04-01T09:12:48Z)
- History Repeats Itself: Human Motion Prediction via Motion Attention [81.94175022575966]
We introduce an attention-based feed-forward network that explicitly leverages the observation that human motion tends to repeat itself.
In particular, we propose to extract motion attention to capture the similarity between the current motion context and the historical motion sub-sequences.
Our experiments on Human3.6M, AMASS and 3DPW evidence the benefits of our approach for both periodical and non-periodical actions.
arXiv Detail & Related papers (2020-07-23T02:12:27Z)
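The motion-attention idea above (weighting historical motion sub-sequences by their similarity to the current motion context) can be sketched with plain dot-product attention. This is an illustrative simplification, not the paper's actual architecture: the descriptors here are raw vectors, whereas the paper learns keys, queries, and values from pose sequences.

```python
import numpy as np

def motion_attention(context, history_subseqs):
    # context: (d,) descriptor of the current motion context
    # history_subseqs: (n, d) descriptors of historical motion sub-sequences
    scores = history_subseqs @ context          # similarity of each sub-sequence
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    attended = weights @ history_subseqs         # attention-weighted history
    return attended, weights

rng = np.random.default_rng(0)
context = rng.standard_normal(8)
subseqs = rng.standard_normal((5, 8))
attended, weights = motion_attention(context, subseqs)
```

Sub-sequences most similar to the current context receive the largest weights, which is how repeated (periodical) motions get exploited at prediction time.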
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.