Spatial-Temporal Gating-Adjacency GCN for Human Motion Prediction
- URL: http://arxiv.org/abs/2203.01474v1
- Date: Thu, 3 Mar 2022 01:20:24 GMT
- Title: Spatial-Temporal Gating-Adjacency GCN for Human Motion Prediction
- Authors: Chongyang Zhong, Lei Hu, Zihao Zhang, Yongjing Ye, Shihong Xia
- Abstract summary: We propose the Spatial-Temporal Gating-Adjacency GCN to learn the complex spatial-temporal dependencies over diverse action types.
GAGCN achieves state-of-the-art performance in both short-term and long-term predictions.
- Score: 14.42671575251554
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Predicting future motion from a historical motion sequence is a fundamental
problem in computer vision, with wide applications in autonomous driving
and robotics. Recent works have shown that Graph Convolutional
Networks (GCN) are instrumental in modeling the relationships between different
joints. However, given the varied and diverse action types in human
motion data, the cross-dependency of spatial-temporal relationships is
difficult to capture with a decoupled modeling strategy, which may also
exacerbate the problem of insufficient generalization. Therefore, we propose
the Spatial-Temporal Gating-Adjacency GCN (GAGCN) to learn the complex
spatial-temporal dependencies over diverse action types. Specifically, we adopt
gating networks to enhance the generalization of GCN via the trainable adaptive
adjacency matrix obtained by blending the candidate spatial-temporal adjacency
matrices. Moreover, GAGCN addresses the cross-dependency of space and time by
balancing the weights of spatial-temporal modeling and fusing the decoupled
spatial-temporal features. Extensive experiments on Human3.6M, AMASS, and 3DPW
demonstrate that GAGCN achieves state-of-the-art performance in both short-term
and long-term predictions. Our code will be released in the future.
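The gating mechanism described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation (which has not been released): the layer sizes, the pooling used by the gating network, and the single-layer structure are all assumptions. The core idea shown is that a gating network scores K candidate adjacency matrices and blends them into one adaptive adjacency used by an ordinary graph convolution:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_gcn_layer(X, candidates, gate_W, feat_W):
    """One GCN layer with a gating-blended adaptive adjacency (sketch).

    X          : (J, F)    joint features
    candidates : (K, J, J) candidate spatial adjacency matrices
    gate_W     : (J*F, K)  gating-network weights (hypothetical shape)
    feat_W     : (F, F_out) feature transform
    """
    # Gating network: flatten the input and score each candidate adjacency.
    gate_logits = X.reshape(-1) @ gate_W          # (K,)
    alpha = softmax(gate_logits)                  # blending weights, sum to 1
    # Adaptive adjacency: convex combination of the candidates.
    A = np.tensordot(alpha, candidates, axes=1)   # (J, J)
    # Standard graph convolution with the blended adjacency.
    return np.maximum(A @ X @ feat_W, 0.0)        # ReLU

rng = np.random.default_rng(0)
J, F, K = 5, 4, 3
X = rng.standard_normal((J, F))
cands = rng.random((K, J, J))
out = gated_gcn_layer(X, cands, rng.standard_normal((J * F, K)),
                      rng.standard_normal((F, 4)))
print(out.shape)  # (5, 4)
```

Because the blended adjacency depends on the input pose, the effective graph topology can vary across action types, which is what the abstract credits for the improved generalization.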
Related papers
- Multi-Graph Convolution Network for Pose Forecasting [0.8057006406834467]
We propose a novel approach called the multi-graph convolution network (MGCN) for 3D human pose forecasting.
MGCN simultaneously captures spatial and temporal information by introducing an augmented graph for pose sequences.
In our evaluation, MGCN outperforms the state-of-the-art in pose prediction.
arXiv Detail & Related papers (2023-04-11T03:59:43Z) - Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE)
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
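The continuous message-passing idea behind MTGODE can be illustrated with a minimal sketch. The fixed-step forward-Euler solver, the simple diffusion-style dynamics, and the single weight matrix are simplifying assumptions for illustration; the actual model also infers the unknown graph structure:

```python
import numpy as np

def graph_ode_rhs(A, H, W):
    """dH/dt = A @ H @ W - H : a simple graph-diffusion dynamics (assumed form)."""
    return A @ H @ W - H

def integrate(A, H0, W, t_end=1.0, dt=0.1):
    """Forward-Euler solve of the node-feature ODE from t=0 to t_end."""
    H = H0.copy()
    for _ in range(int(t_end / dt)):
        H = H + dt * graph_ode_rhs(A, H, W)
    return H

rng = np.random.default_rng(1)
N, F = 6, 3
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)   # row-normalized adjacency
H = integrate(A, rng.standard_normal((N, F)), np.eye(F))
print(H.shape)  # (6, 3)
```

Treating propagation as an ODE lets spatial message passing unfold in continuous depth rather than through a fixed stack of discrete layers.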
arXiv Detail & Related papers (2022-02-17T02:17:31Z) - Spatio-Temporal Joint Graph Convolutional Networks for Traffic Forecasting [75.10017445699532]
Recent works have shifted their focus towards formulating traffic forecasting as a temporal graph modeling problem.
We propose a novel approach for accurate traffic forecasting on road networks over multiple future time steps.
arXiv Detail & Related papers (2021-11-25T08:45:14Z) - Space-Time-Separable Graph Convolutional Network for Pose Forecasting [3.6417475195085602]
STS-GCN models the human pose dynamics only with a graph convolutional network (GCN)
The space-time graph connectivity is factored into space and time affinity, which bottlenecks the space-time cross-talk, while enabling full joint-joint and time-time correlations.
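The separable factorization can be sketched as follows (the shapes and the sequential application order are assumptions for illustration): instead of one (T·J)×(T·J) space-time adjacency, a J×J joint-joint affinity and a T×T time-time affinity are applied in turn, cutting parameters from (TJ)² to T² + J² while blocking direct cross-talk between one joint at one frame and a different joint at another frame:

```python
import numpy as np

rng = np.random.default_rng(2)
T, J, F = 10, 5, 8                # frames, joints, channels
X = rng.standard_normal((T, J, F))

A_s = rng.random((J, J))          # joint-joint (spatial) affinity
A_t = rng.random((T, T))          # time-time (temporal) affinity

# Separable propagation: spatial mixing within each frame, then temporal mixing per joint.
X_spatial = np.einsum('jk,tkf->tjf', A_s, X)
X_st = np.einsum('ts,sjf->tjf', A_t, X_spatial)

# The equivalent full space-time adjacency is the Kronecker product A_t ⊗ A_s,
# with (T*J)**2 entries instead of T**2 + J**2.
full = np.kron(A_t, A_s) @ X.reshape(T * J, F)
print(np.allclose(full.reshape(T, J, F), X_st))  # True
```

The factored form realizes only Kronecker-structured space-time graphs, which is exactly the "bottleneck" on space-time cross-talk the summary describes.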
arXiv Detail & Related papers (2021-10-09T13:59:30Z) - Spatial-Temporal Graph ODE Networks for Traffic Flow Forecasting [22.421667339552467]
Spatial-temporal forecasting has attracted tremendous attention in a wide range of applications, and traffic flow prediction is a canonical and typical example.
Existing works typically utilize shallow graph neural networks (GNNs) and temporal extraction modules to model spatial and temporal dependencies respectively.
We propose Spatial-Temporal Graph Ordinary Differential Equation Networks (STGODE), which captures spatial-temporal dynamics through a tensor-based ordinary differential equation (ODE)
We evaluate our model on multiple real-world traffic datasets and superior performance is achieved over state-of-the-art baselines.
arXiv Detail & Related papers (2021-06-24T11:48:45Z) - Spatial-Temporal Fusion Graph Neural Networks for Traffic Flow Forecasting [35.072979313851235]
Spatial-temporal forecasting of traffic flow is a challenging task because of complicated spatial dependencies and dynamic temporal trends between different roads.
Existing frameworks typically utilize given spatial adjacency graph and sophisticated mechanisms for modeling spatial and temporal correlations.
This paper proposes Spatial-Temporal Fusion Graph Neural Networks (STFGNN) for traffic flow forecasting.
arXiv Detail & Related papers (2020-12-15T14:03:17Z) - On the spatial attention in Spatio-Temporal Graph Convolutional Networks for skeleton-based human action recognition [97.14064057840089]
Graph convolutional networks (GCNs) have shown promising performance in skeleton-based human action recognition by modeling a sequence of skeletons as a graph.
Most of the recently proposed spatio-temporal GCN-based methods improve the performance by learning the graph structure at each layer of the network.
arXiv Detail & Related papers (2020-11-07T19:03:04Z) - Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition [79.33539539956186]
We propose a simple method to disentangle multi-scale graph convolutions and a unified spatial-temporal graph convolutional operator named G3D.
By coupling these proposals, we develop a powerful feature extractor named MS-G3D based on which our model outperforms previous state-of-the-art methods on three large-scale datasets.
arXiv Detail & Related papers (2020-03-31T11:28:25Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC)
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we construct a joint feature sequence based on the sequence and instant state information so that the generated trajectories maintain spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z) - Spatial-Temporal Transformer Networks for Traffic Flow Forecasting [74.76852538940746]
We propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) to improve the accuracy of long-term traffic forecasting.
Specifically, we present a new variant of graph neural networks, named spatial transformer, by dynamically modeling directed spatial dependencies.
The proposed model enables fast and scalable training over long-range spatial-temporal dependencies.
arXiv Detail & Related papers (2020-01-09T10:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.