Multi-grained Trajectory Graph Convolutional Networks for
Habit-unrelated Human Motion Prediction
- URL: http://arxiv.org/abs/2012.12558v1
- Date: Wed, 23 Dec 2020 09:41:50 GMT
- Title: Multi-grained Trajectory Graph Convolutional Networks for
Habit-unrelated Human Motion Prediction
- Authors: Jin Liu, Jianqin Yin
- Abstract summary: A lightweight framework based on multi-grained graph convolutional networks is proposed for habit-unrelated human motion prediction.
A new motion generation method is proposed to generate left-handed motions, so as to model motion with less bias toward human habit.
Experimental results on challenging datasets, including Human3.6M and CMU Mocap, show that the proposed model outperforms the state of the art with fewer than 0.12 times the parameters.
- Score: 4.070072825448614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion prediction is an essential part of human-robot collaboration.
Unlike most existing methods, which focus mainly on improving the
effectiveness of spatiotemporal modeling for accurate prediction, we take
both effectiveness and efficiency into consideration, aiming at prediction
quality, computational efficiency, and a lightweight model. A lightweight
framework based on multi-grained trajectory graph convolutional networks is
proposed for habit-unrelated human motion prediction. Specifically, we
represent human motion as multi-grained trajectories, including joint
trajectories and sub-joint trajectories. Based on this representation,
multi-grained trajectory graph convolutional networks are proposed to
explore spatiotemporal dependencies at multiple granularities. Moreover,
considering the right-handedness habit of the vast majority of people, a
new motion generation method is proposed to generate left-handed motions,
so as to model motion with less bias toward human habit. Experimental
results on challenging datasets, including Human3.6M and CMU Mocap, show
that the proposed model outperforms the state of the art with fewer than
0.12 times the parameters, which demonstrates the effectiveness and
efficiency of our proposed method.
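The left-handed motion generation idea described above amounts to mirroring recorded motion across the body's sagittal plane. The paper does not publish its implementation, so the sketch below is only a minimal illustration under stated assumptions: the joint index lists are hypothetical placeholders, not the actual Human3.6M skeleton layout.

```python
import numpy as np

# Hypothetical left/right joint indices for illustration only; a real
# skeleton (e.g. Human3.6M) uses its own specific index layout.
LEFT_JOINTS = [4, 5, 6, 11, 12, 13]    # e.g. left hip/knee/foot, shoulder/elbow/wrist
RIGHT_JOINTS = [1, 2, 3, 14, 15, 16]   # their right-side counterparts

def mirror_motion(motion: np.ndarray) -> np.ndarray:
    """Mirror a motion sequence of shape (frames, joints, 3) left<->right.

    Negating the x-coordinate reflects each pose across the sagittal
    plane; swapping the left/right joint indices then keeps the
    skeleton's semantic joint labels consistent.
    """
    mirrored = motion.copy()
    mirrored[..., 0] *= -1.0  # reflect x across the body midline
    mirrored[:, LEFT_JOINTS + RIGHT_JOINTS] = mirrored[:, RIGHT_JOINTS + LEFT_JOINTS]
    return mirrored
```

Applying the function twice returns the original sequence, which is a convenient sanity check that the reflection and the index swap are consistent.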
Related papers
- MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning [99.09906827676748]
We introduce MotionRL, the first approach to utilize Multi-Reward Reinforcement Learning (RL) for optimizing text-to-motion generation tasks.
Our novel approach uses reinforcement learning to fine-tune the motion generator based on human preferences and prior knowledge of the human perception model.
In addition, MotionRL introduces a novel multi-objective optimization strategy to approximate optimality between text adherence, motion quality, and human preferences.
arXiv Detail & Related papers (2024-10-09T03:27:14Z) - MDMP: Multi-modal Diffusion for supervised Motion Predictions with uncertainty [7.402769693163035]
This paper introduces a Multi-modal Diffusion model for Motion Prediction (MDMP).
It integrates skeletal data and textual descriptions of actions to generate refined long-term motion predictions with quantifiable uncertainty.
Our model consistently outperforms existing generative techniques in accurately predicting long-term motions.
arXiv Detail & Related papers (2024-10-04T18:49:00Z) - MoManifold: Learning to Measure 3D Human Motion via Decoupled Joint Acceleration Manifolds [20.83684434910106]
We present MoManifold, a novel human motion prior, which models plausible human motion in continuous high-dimensional motion space.
Specifically, we propose novel decoupled joint acceleration to model human dynamics from existing limited motion data.
Extensive experiments demonstrate that MoManifold outperforms existing SOTAs as a prior in several downstream tasks.
arXiv Detail & Related papers (2024-09-01T15:00:16Z) - HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models
with Minimal Feedback [46.744192144648764]
HuTuMotion is an innovative approach for generating natural human motions that navigates latent motion diffusion models by leveraging few-shot human feedback.
Our findings reveal that utilizing few-shot feedback can yield performance levels on par with those attained through extensive human feedback.
arXiv Detail & Related papers (2023-12-19T15:13:08Z) - Motion Flow Matching for Human Motion Synthesis and Editing [75.13665467944314]
We propose Motion Flow Matching, a novel generative model for human motion generation featuring efficient sampling and effectiveness in motion editing applications.
Our method reduces the sampling complexity from a thousand steps in previous diffusion models to just ten steps, while achieving comparable performance on text-to-motion and action-to-motion generation benchmarks.
arXiv Detail & Related papers (2023-12-14T12:57:35Z) - GDTS: Goal-Guided Diffusion Model with Tree Sampling for Multi-Modal Pedestrian Trajectory Prediction [15.731398013255179]
We propose a novel Goal-Guided Diffusion Model with Tree Sampling for multi-modal trajectory prediction.
A two-stage tree sampling algorithm is presented, which leverages common features to reduce the inference time and improve accuracy for multi-modal prediction.
Experimental results demonstrate that our proposed framework achieves comparable state-of-the-art performance with real-time inference speed in public datasets.
arXiv Detail & Related papers (2023-11-25T03:55:06Z) - Investigating Pose Representations and Motion Contexts Modeling for 3D
Motion Prediction [63.62263239934777]
We conduct an in-depth study on various pose representations with a focus on their effects on the motion prediction task.
We propose a novel RNN architecture termed AHMR (Attentive Hierarchical Motion Recurrent network) for motion prediction.
Our approach outperforms the state-of-the-art methods in short-term prediction and achieves much enhanced long-term prediction proficiency.
arXiv Detail & Related papers (2021-12-30T10:45:22Z) - Learning to Predict Diverse Human Motions from a Single Image via
Mixture Density Networks [9.06677862854201]
We propose a novel approach to predict future human motions from a single image, using mixture density network (MDN) modeling.
Contrary to most existing deep human motion prediction approaches, the multimodal nature of MDN enables the generation of diverse future motion hypotheses.
Our trained model directly takes an image as input and generates multiple plausible motions that satisfy the given condition.
arXiv Detail & Related papers (2021-09-13T08:49:33Z) - Non-local Graph Convolutional Network for joint Activity Recognition and
Motion Prediction [2.580765958706854]
3D skeleton-based motion prediction and activity recognition are two interwoven tasks in human behaviour analysis.
We propose a new way to combine the advantages of both graph convolutional neural networks and recurrent neural networks for joint human motion prediction and activity recognition.
arXiv Detail & Related papers (2021-08-03T14:07:10Z) - Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model could generate several future motions when given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z) - SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory
Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network(SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions between pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.