Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction
- URL: http://arxiv.org/abs/2205.15608v1
- Date: Tue, 31 May 2022 08:38:07 GMT
- Title: Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction
- Authors: Wei Mao and Miaomiao Liu and Mathieu Salzmann
- Abstract summary: We introduce the task of action-driven stochastic human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
- Score: 81.94175022575966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the task of action-driven stochastic human motion prediction,
which aims to predict multiple plausible future motions given a sequence of
action labels and a short motion history. This differs from existing works,
which predict motions that either do not respect any specific action category,
or follow a single action label. In particular, addressing this task requires
tackling two challenges: The transitions between the different actions must be
smooth; the length of the predicted motion depends on the action sequence and
varies significantly across samples. As we cannot realistically expect training
data to cover sufficiently diverse action transitions and motion lengths, we
propose an effective training strategy that combines multiple motions from
different actions and introduces a weak form of supervision to encourage
smooth transitions. We then design a VAE-based model conditioned on both the
observed motion and the action label sequence, allowing us to generate multiple
plausible future motions of varying length. We illustrate the generality of our
approach by exploring its use with two different temporal encoding models,
namely RNNs and Transformers. Our approach outperforms baseline models
constructed by adapting state-of-the-art single action-conditioned motion
generation methods and stochastic human motion prediction approaches to our new
task of action-driven stochastic motion prediction. Our code is available at
https://github.com/wei-mao-2019/WAT.
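To make the modeling idea concrete, the following is a minimal, hypothetical PyTorch-style sketch of a VAE whose encoder and decoder are conditioned on both the observed motion history and a sequence of action labels, with an autoregressive decoder that can roll out futures of varying length. All module names, dimensions, and hyperparameters are illustrative assumptions; this is not the authors' released implementation (see the repository above).

```python
# Minimal sketch (assumption): an action- and history-conditioned VAE for motion
# prediction. Poses are flattened joint vectors; a GRU encodes the motion history
# and an embedding summarizes the action-label sequence. Not the official WAT code.
import torch
import torch.nn as nn


class ActionConditionedVAE(nn.Module):
    def __init__(self, pose_dim=63, num_actions=15, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, hidden_dim)
        self.history_enc = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.future_enc = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(3 * hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(3 * hidden_dim, latent_dim)
        self.dec_init = nn.Linear(latent_dim + 2 * hidden_dim, hidden_dim)
        self.decoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def condition(self, history, actions):
        # history: (B, T_obs, pose_dim); actions: (B, num_labels) integer labels
        _, h = self.history_enc(history)             # (1, B, H)
        a = self.action_emb(actions).mean(dim=1)     # (B, H) summary of the label sequence
        return torch.cat([h.squeeze(0), a], dim=-1)  # (B, 2H)

    def forward(self, history, actions, future):
        cond = self.condition(history, actions)
        _, hf = self.future_enc(future)
        feat = torch.cat([cond, hf.squeeze(0)], dim=-1)
        mu, logvar = self.to_mu(feat), self.to_logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decode(z, cond, history[:, -1], future.size(1))
        return recon, mu, logvar

    def decode(self, z, cond, last_pose, num_steps):
        # Autoregressive decoding of a variable-length future, seeded by the latent
        # code and the conditioning features; the length can depend on the action sequence.
        h = torch.tanh(self.dec_init(torch.cat([z, cond], dim=-1))).unsqueeze(0)
        pose, frames = last_pose, []
        for _ in range(num_steps):
            out, h = self.decoder(pose.unsqueeze(1), h)
            pose = pose + self.out(out.squeeze(1))    # predict a residual pose update
            frames.append(pose)
        return torch.stack(frames, dim=1)             # (B, num_steps, pose_dim)


# Example usage with random tensors standing in for real mocap data.
model = ActionConditionedVAE()
hist = torch.randn(4, 25, 63)          # 25 observed frames
acts = torch.randint(0, 15, (4, 2))    # two action labels per sample
fut = torch.randn(4, 60, 63)           # 60 future frames (length varies in practice)
recon, mu, logvar = model(hist, acts, fut)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, fut) + 1e-3 * kl
```

Sampling multiple plausible futures then amounts to drawing several latent codes z from the prior for the same history and action sequence and decoding each one.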
Related papers
- Orientation-Aware Leg Movement Learning for Action-Driven Human Motion Prediction [7.150292351809277]
Action-driven human motion prediction aims to forecast future human motion based on the observed sequence.
It requires modeling smooth yet realistic transitions between multiple action labels.
We generalize our in-betweening model, trained on one dataset, to two unseen large-scale motion datasets to produce natural transitions.
arXiv Detail & Related papers (2023-10-23T13:16:51Z)
- DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion [70.33381660741861]
We present DiverseMotion, a new approach for synthesizing high-quality human motions conditioned on textual descriptions.
We show that our DiverseMotion achieves the state-of-the-art motion quality and competitive motion diversity.
arXiv Detail & Related papers (2023-09-04T05:43:48Z)
- Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system to reach target poses of characters by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
arXiv Detail & Related papers (2023-08-24T12:56:39Z)
- Learning Snippet-to-Motion Progression for Skeleton-based Human Motion Prediction [14.988322340164391]
Existing Graph Convolutional Networks for human motion prediction largely adopt a one-step scheme.
We observe that human motions have transitional patterns and can be split into snippets representative of each transition.
We propose a snippet-to-motion multi-stage framework that breaks motion prediction into sub-tasks easier to accomplish.
arXiv Detail & Related papers (2023-07-26T07:36:38Z)
- HumanMAC: Masked Motion Completion for Human Motion Prediction [62.279925754717674]
Human motion prediction is a classical problem in computer vision and computer graphics.
Previous efforts achieve strong empirical performance based on an encoding-decoding style.
In this paper, we propose a novel framework from a new perspective.
arXiv Detail & Related papers (2023-02-07T18:34:59Z)
- Executing your Commands via Motion Diffusion in Latent Space [51.64652463205012]
We propose a Motion Latent-based Diffusion model (MLD) to produce vivid motion sequences conforming to the given conditional inputs.
Our MLD achieves significant improvements over the state-of-the-art methods among extensive human motion generation tasks.
arXiv Detail & Related papers (2022-12-08T03:07:00Z)
- Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion [88.45326906116165]
We present a new framework that formulates the trajectory prediction task as the reverse process of motion indeterminacy diffusion (MID).
We encode the history behavior information and the social interactions as a state embedding and devise a Transformer-based diffusion model to capture the temporal dependencies of trajectories; a minimal illustrative sketch of this conditioning idea appears after this list.
Experiments on the human trajectory prediction benchmarks including the Stanford Drone and ETH/UCY datasets demonstrate the superiority of our method.
arXiv Detail & Related papers (2022-03-25T16:59:08Z)
- Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z)
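Several entries above (e.g. MID and MLD) build on conditional denoising diffusion. As a rough illustration of the shared idea, the hypothetical PyTorch sketch below trains a small Transformer to denoise future trajectories conditioned on a state embedding of the observed history. Names, dimensions, and the noise schedule are assumptions for illustration only, not any paper's released code.

```python
# Rough sketch (assumption): one conditional denoising step in the spirit of
# diffusion-based trajectory/motion prediction. A history encoder produces a
# state embedding; a Transformer denoiser predicts the noise added to the
# future trajectory at a sampled diffusion timestep.
import torch
import torch.nn as nn


class ConditionalDenoiser(nn.Module):
    def __init__(self, point_dim=2, d_model=64, num_steps=100):
        super().__init__()
        self.num_steps = num_steps
        self.history_enc = nn.GRU(point_dim, d_model, batch_first=True)
        self.in_proj = nn.Linear(point_dim, d_model)
        self.time_emb = nn.Embedding(num_steps, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.denoiser = nn.TransformerEncoder(layer, num_layers=2)
        self.out_proj = nn.Linear(d_model, point_dim)
        # Linear beta schedule -> cumulative alpha-bar used by the forward (noising) process.
        betas = torch.linspace(1e-4, 0.02, num_steps)
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - betas, dim=0))

    def forward(self, noisy_future, t, history):
        # noisy_future: (B, T_fut, point_dim); t: (B,) diffusion steps; history: (B, T_obs, point_dim)
        _, state = self.history_enc(history)                   # (1, B, d_model) state embedding
        cond = state.transpose(0, 1) + self.time_emb(t)[:, None, :]
        x = self.in_proj(noisy_future) + cond                   # broadcast the condition over time
        return self.out_proj(self.denoiser(x))                  # predicted noise, (B, T_fut, point_dim)


# Example training step on random stand-in data.
model = ConditionalDenoiser()
history = torch.randn(8, 8, 2)       # 8 observed 2-D positions per trajectory
future = torch.randn(8, 12, 2)       # 12 future positions
t = torch.randint(0, model.num_steps, (8,))
noise = torch.randn_like(future)
a_bar = model.alpha_bar[t][:, None, None]
noisy = a_bar.sqrt() * future + (1 - a_bar).sqrt() * noise       # forward (noising) process
loss = nn.functional.mse_loss(model(noisy, t, history), noise)   # standard epsilon-prediction loss
```

At inference time, the same network would be applied repeatedly to denoise a random sample step by step, yielding multiple plausible futures for the same observed history.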
This list is automatically generated from the titles and abstracts of the papers on this site.