Improving Human Motion Prediction Through Continual Learning
- URL: http://arxiv.org/abs/2107.00544v1
- Date: Thu, 1 Jul 2021 15:34:41 GMT
- Title: Improving Human Motion Prediction Through Continual Learning
- Authors: Mohammad Samin Yasar and Tariq Iqbal
- Abstract summary: Human motion prediction is an essential component for enabling closer human-robot collaboration.
The task is compounded by the variability of human motion, both at a skeletal level due to the varying size of humans and at a motion level due to individual movement idiosyncrasies.
We propose a modular sequence learning approach that allows end-to-end training while also having the flexibility of being fine-tuned.
- Score: 2.720960618356385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion prediction is an essential component for enabling closer
human-robot collaboration. The task of accurately predicting human motion is
non-trivial. It is compounded by the variability of human motion, both at a
skeletal level due to the varying size of humans and at a motion level due to
individual movement idiosyncrasies. These sources of variability make it challenging for
learning algorithms to obtain a general representation that is robust to the
diverse spatio-temporal patterns of human motion. In this work, we propose a
modular sequence learning approach that allows end-to-end training while also
having the flexibility of being fine-tuned. Our approach relies on the
diversity of training samples to first learn a robust representation, which can
then be fine-tuned in a continual learning setup to predict the motion of new
subjects. We evaluated the proposed approach by comparing its performance
against state-of-the-art baselines. The results suggest that our approach
outperforms other methods over all the evaluated temporal horizons, using a
small amount of data for fine-tuning. The improved performance of our approach
opens up the possibility of using continual learning for personalized and
reliable motion prediction.
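The two-stage scheme described in the abstract, first learning a representation from diverse subjects and then fine-tuning it in a continual learning setup on a small amount of new-subject data, can be sketched in miniature. The following is an illustrative toy, not the authors' architecture: it stands in a linear next-pose predictor trained by gradient descent for the actual sequence model, and the synthetic "subjects" and data shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subject(scale, n=200, d=6):
    """Synthetic 'motion' data: the next pose is a subject-specific
    linear function of the current pose plus noise (a stand-in for
    real skeletal sequences)."""
    A = scale * np.eye(d) + 0.05 * rng.standard_normal((d, d))
    X = rng.standard_normal((n, d))
    Y = X @ A + 0.01 * rng.standard_normal((n, d))
    return X, Y

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

def sgd_steps(W, X, Y, lr=0.05, steps=300):
    """Plain gradient descent on the squared prediction error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ W - Y) / len(X)
        W = W - lr * grad
    return W

d = 6
# Stage 1: pretrain on several 'subjects' with varied dynamics.
pool = [make_subject(s) for s in (0.8, 1.0, 1.2)]
Xp = np.vstack([X for X, _ in pool])
Yp = np.vstack([Y for _, Y in pool])
W = sgd_steps(np.zeros((d, d)), Xp, Yp)

# Stage 2: continual fine-tuning on a small sample from a new subject.
Xn, Yn = make_subject(1.5, n=30)
err_before = mse(W, Xn, Yn)
W_ft = sgd_steps(W, Xn, Yn, steps=100)
err_after = mse(W_ft, Xn, Yn)
print(err_before, err_after)  # fine-tuning should lower the new-subject error
```

The point of the sketch is only the split: a generic model fit on pooled subjects is then adapted with a few gradient steps on a small new-subject sample, mirroring the paper's claim that little data is needed for personalization.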
Related papers
- MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning [99.09906827676748]
We introduce MotionRL, the first approach to utilize Multi-Reward Reinforcement Learning (RL) for optimizing text-to-motion generation tasks.
Our novel approach uses reinforcement learning to fine-tune the motion generator based on human preferences and prior knowledge of the human perception model.
In addition, MotionRL introduces a novel multi-objective optimization strategy to approximate optimality between text adherence, motion quality, and human preferences.
arXiv Detail & Related papers (2024-10-09T03:27:14Z)
- Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z)
- Continual Imitation Learning for Prosthetic Limbs [0.7922558880545526]
Motorized bionic limbs offer promise, but their utility depends on mimicking the evolving synergy of human movement in various settings.
We present a novel model for bionic prostheses' application that leverages camera-based motion capture and wearable sensor data.
We propose a model that can multitask, adapt continually, anticipate movements, and refine locomotion.
arXiv Detail & Related papers (2024-05-02T09:22:54Z)
- AdvMT: Adversarial Motion Transformer for Long-term Human Motion Prediction [2.837740438355204]
We present the Adversarial Motion Transformer (AdvMT), a novel model that integrates a transformer-based motion encoder and a temporal continuity discriminator.
With adversarial training, our method effectively reduces the unwanted artifacts in predictions, thereby ensuring the learning of more realistic and fluid human motions.
arXiv Detail & Related papers (2024-01-10T09:15:50Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- TransFusion: A Practical and Effective Transformer-based Diffusion Model for 3D Human Motion Prediction [1.8923948104852863]
We propose TransFusion, an innovative and practical diffusion-based model for 3D human motion prediction.
Our model leverages Transformer as the backbone with long skip connections between shallow and deep layers.
In contrast to prior diffusion-based models that utilize extra modules like cross-attention and adaptive layer normalization, we treat all inputs, including conditions, as tokens to create a more lightweight model.
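The "all inputs as tokens" idea described above can be illustrated with a minimal single-head self-attention pass: instead of routing conditions through a separate cross-attention or adaptive normalization module, condition embeddings are simply prepended to the pose tokens and attended to like any other token. This is a hedged sketch of the general idea, not TransFusion's actual model; all shapes and the random projections are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k=16):
    """Single-head self-attention: condition tokens and pose tokens
    attend to each other uniformly, with no extra conditioning module."""
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V

d_model = 32
pose_tokens = rng.standard_normal((10, d_model))  # observed motion frames
cond_tokens = rng.standard_normal((2, d_model))   # e.g. diffusion step + condition embedding
seq = np.concatenate([cond_tokens, pose_tokens])  # conditions are just prepended tokens
out = self_attention(seq)
print(out.shape)  # (12, 16)
```

Because conditioning reduces to token concatenation, the attention layer itself needs no special-case code, which is what makes the resulting model lighter weight.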
arXiv Detail & Related papers (2023-07-30T01:52:07Z)
- Learning Human Motion Prediction via Stochastic Differential Equations [19.30774202476477]
We propose a novel approach in modeling the motion prediction problem based on differential equations and path integrals.
It achieves a 12.48% accuracy improvement over current state-of-the-art methods on average.
arXiv Detail & Related papers (2021-12-21T11:55:13Z)
- Dyadic Human Motion Prediction [119.3376964777803]
We introduce a motion prediction framework that explicitly reasons about the interactions of two observed subjects.
Specifically, we achieve this by introducing a pairwise attention mechanism that models the mutual dependencies in the motion history of the two subjects.
This allows us to preserve the long-term motion dynamics in a more realistic way and more robustly predict unusual and fast-paced movements.
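A pairwise attention mechanism of the kind described above can be sketched as cross-attention applied in both directions between the two subjects' motion histories. This is an illustrative sketch under assumed shapes and random projections, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d_k=8):
    """One subject's motion history attends over the other's,
    modeling mutual dependencies between the two."""
    d = queries.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    attn = softmax((queries @ Wq) @ (context @ Wk).T / np.sqrt(d_k))
    return attn @ (context @ Wv)

d_model = 16
hist_a = rng.standard_normal((20, d_model))  # subject A's pose history
hist_b = rng.standard_normal((20, d_model))  # subject B's pose history

# Pairwise attention in both directions: each subject's representation
# is enriched with features attended from the partner's history.
a_given_b = cross_attention(hist_a, hist_b)
b_given_a = cross_attention(hist_b, hist_a)
print(a_given_b.shape, b_given_a.shape)  # (20, 8) (20, 8)
```

Running the attention symmetrically is what lets each subject's predicted motion react to the other's, which is the dyadic setting's key difference from single-subject prediction.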
arXiv Detail & Related papers (2021-12-01T10:30:40Z)
- Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model could generate several future motions when given an observed motion sequence.
We extensively validate our approach on a large scale benchmark dataset Human3.6m.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- 3D Human motion anticipation and classification [8.069283749930594]
We propose a novel sequence-to-sequence model for human motion prediction and feature learning.
Our model learns to predict multiple future sequences of human poses from the same input sequence.
We show that it takes less than half the number of epochs to train an activity recognition network when using the features learned by the discriminator.
arXiv Detail & Related papers (2020-12-31T00:19:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.