Few-shot human motion prediction for heterogeneous sensors
- URL: http://arxiv.org/abs/2212.11771v2
- Date: Mon, 20 Mar 2023 15:05:48 GMT
- Title: Few-shot human motion prediction for heterogeneous sensors
- Authors: Rafael Rego Drumond, Lukas Brinkmeyer and Lars Schmidt-Thieme
- Abstract summary: We introduce the first few-shot motion prediction approach that explicitly incorporates the spatial graph.
We show that our model performs on par with the best approach so far when evaluated on tasks with a fixed output space.
- Score: 5.210197476419621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion prediction is a complex task as it involves forecasting variables over time on a graph of connected sensors. This is especially true in the case of few-shot learning, where we strive to forecast motion sequences for previously unseen actions based on only a few examples. Despite this, almost all related approaches for few-shot motion prediction do not incorporate the underlying graph, even though it is a common component in classical motion prediction. Furthermore, state-of-the-art methods for few-shot motion prediction are restricted to motion tasks with a fixed output space, meaning all tasks are limited to the same sensor graph. In this work, we propose to extend recent work on few-shot time-series forecasting with heterogeneous attributes by incorporating graph neural networks, yielding the first few-shot motion prediction approach that explicitly incorporates the spatial graph while also generalizing across motion tasks with heterogeneous sensors. In our experiments on motion tasks with heterogeneous sensors, we demonstrate significant performance improvements, with lifts from 10.4% up to 39.3% over the best state-of-the-art models. Moreover, we show that our model performs on par with the best approach so far when evaluated on tasks with a fixed output space, while using two orders of magnitude fewer parameters.
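The abstract describes the model only at a high level. As a purely illustrative sketch of the core idea it names (a forecaster whose weights are shared across sensors and combined through the task's sensor graph, so the same parameters can serve tasks with different sensor sets), one might write something like the following. The class name, the single message-passing layer, and the residual readout are assumptions for illustration, not the authors' architecture, and the few-shot conditioning on a support set of example sequences is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adjacency(A):
    """Row-normalize a sensor adjacency matrix after adding self-loops."""
    A = A + np.eye(A.shape[0])
    return A / A.sum(axis=1, keepdims=True)

class GraphMotionForecaster:
    """Hypothetical one-layer graph forecaster: every weight matrix is
    shared across sensors, so the same parameters apply to tasks with
    any number of sensors (heterogeneous output spaces)."""

    def __init__(self, feat_dim, hidden_dim):
        self.W_self = rng.normal(0.0, 0.1, (feat_dim, hidden_dim))
        self.W_neigh = rng.normal(0.0, 0.1, (feat_dim, hidden_dim))
        self.W_out = rng.normal(0.0, 0.1, (hidden_dim, feat_dim))

    def forecast(self, X, A):
        """X: (T, J, D) observed motion, A: (J, J) sensor graph.
        Returns a one-step-ahead prediction of shape (J, D)."""
        A_hat = normalize_adjacency(A)
        x_last = X[-1]                      # last observed frame, (J, D)
        neigh = A_hat @ x_last              # aggregate neighbouring sensors
        h = np.tanh(x_last @ self.W_self + neigh @ self.W_neigh)
        return x_last + h @ self.W_out      # residual next-frame prediction

# The same (untrained) weights run on a 5-sensor and a 17-sensor task.
model = GraphMotionForecaster(feat_dim=3, hidden_dim=16)
for num_sensors in (5, 17):
    X = rng.normal(size=(10, num_sensors, 3))                # 10 observed frames
    A = (rng.random((num_sensors, num_sensors)) > 0.7).astype(float)
    print(model.forecast(X, A).shape)                         # (num_sensors, 3)
```

Sharing the weight matrices across sensors is one simple way to obtain a model whose parameter count does not grow with the number of sensors, which is the property needed to move between heterogeneous sensor graphs.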
Related papers
- Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representation sequences.
These results place us at rank No.1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z)
- STGlow: A Flow-based Generative Framework with Dual Graphormer for Pedestrian Trajectory Prediction [22.553356096143734]
We propose a novel generative flow-based framework with a dual graphormer for pedestrian trajectory prediction (STGlow).
Our method can more precisely model the underlying data distribution by optimizing the exact log-likelihood of motion behaviors.
Experimental results on several benchmarks demonstrate that our method achieves much better performance compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-21T07:29:24Z)
- DisenHCN: Disentangled Hypergraph Convolutional Networks for Spatiotemporal Activity Prediction [53.76601630407521]
We propose a hypergraph network model called DisenHCN to bridge the gaps in existing solutions.
In particular, we first unify fine-grained user similarity and the complex matching between user preferences and spatiotemporal activity into a heterogeneous hypergraph.
We then disentangle the user representations into different aspects (location-aware, time-aware, and activity-aware) and aggregate the corresponding aspect's features on the constructed hypergraph.
arXiv Detail & Related papers (2022-08-14T06:51:54Z)
- Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction [81.94175022575966]
We introduce the task of action-driven human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
arXiv Detail & Related papers (2022-05-31T08:38:07Z)
- Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction [63.62263239934777]
We conduct an in-depth study on various pose representations with a focus on their effects on the motion prediction task.
We propose a novel RNN architecture termed AHMR (Attentive Hierarchical Motion Recurrent network) for motion prediction.
Our approach outperforms the state-of-the-art methods in short-term prediction and achieves substantially improved long-term prediction.
arXiv Detail & Related papers (2021-12-30T10:45:22Z)
- Dyadic Human Motion Prediction [119.3376964777803]
We introduce a motion prediction framework that explicitly reasons about the interactions of two observed subjects.
Specifically, we achieve this by introducing a pairwise attention mechanism that models the mutual dependencies in the motion history of the two subjects.
This allows us to preserve the long-term motion dynamics in a more realistic way and more robustly predict unusual and fast-paced movements.
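As a rough, hypothetical illustration of such a pairwise attention step (the function name, feature shapes, and random projection matrices below are stand-ins, not the paper's implementation), subject A's queries can attend over subject B's motion history:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_attention(hist_a, hist_b, d_k=16, seed=0):
    """hist_a, hist_b: (T, D) pose features of two interacting subjects.
    Subject A's queries attend over subject B's motion history, so A's
    updated representation depends on B's past movement (swap the
    arguments for the reverse direction). Projections are random
    stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    D = hist_a.shape[1]
    W_q = rng.normal(0.0, 0.1, (D, d_k))
    W_k = rng.normal(0.0, 0.1, (D, d_k))
    W_v = rng.normal(0.0, 0.1, (D, d_k))
    q = hist_a @ W_q                        # (T, d_k) queries from subject A
    k = hist_b @ W_k                        # (T, d_k) keys from subject B
    v = hist_b @ W_v                        # (T, d_k) values from subject B
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (T, T) cross-subject weights
    return attn @ v                         # (T, d_k) B-conditioned features

rng = np.random.default_rng(1)
hist_a = rng.normal(size=(25, 48))          # 25 frames, 48-dim pose vectors
hist_b = rng.normal(size=(25, 48))
print(pairwise_attention(hist_a, hist_b).shape)   # (25, 16)
```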
arXiv Detail & Related papers (2021-12-01T10:30:40Z)
- Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z)
- SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network (SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions between pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z)
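As a rough, hypothetical sketch of the sparse directed aggregation described in the SGCN summary above, dense asymmetric interaction scores can be pruned to the strongest directed edges before features are aggregated; the thresholding rule and all names below are illustrative assumptions (in the paper the scores and the sparsity pattern would be learned):

```python
import numpy as np

def sparse_directed_gcn_step(feats, scores, keep_ratio=0.3):
    """feats:  (N, D) per-pedestrian features at one time step.
    scores: (N, N) asymmetric interaction scores, where scores[i, j]
    rates how strongly pedestrian j influences pedestrian i.
    Only the strongest directed edges are kept, then features are
    aggregated over the resulting sparse directed graph."""
    threshold = np.quantile(scores, 1.0 - keep_ratio)
    mask = (scores >= threshold).astype(float)      # sparse, directed edges
    np.fill_diagonal(mask, 1.0)                     # always keep self-influence
    norm = mask / mask.sum(axis=1, keepdims=True)   # row-normalize per pedestrian
    return norm @ feats                             # aggregate influencing pedestrians

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))       # 6 pedestrians, 8-dim features
scores = rng.random((6, 6))           # dense asymmetric interaction scores
print(sparse_directed_gcn_step(feats, scores).shape)   # (6, 8)
```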
- Multi-grained Trajectory Graph Convolutional Networks for Habit-unrelated Human Motion Prediction [4.070072825448614]
A lightweight framework based on multi-grained graph convolutional networks is proposed for habit-unrelated human motion prediction.
A new motion generation method is proposed to generate left-handed motions, in order to model motion with less bias toward human habit.
Experimental results on challenging datasets, including Human3.6M and CMU Mocap, show that the proposed model outperforms the state-of-the-art with fewer than 0.12 times the parameters.
arXiv Detail & Related papers (2020-12-23T09:41:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.