TPCN: Temporal Point Cloud Networks for Motion Forecasting
- URL: http://arxiv.org/abs/2103.03067v1
- Date: Thu, 4 Mar 2021 14:44:32 GMT
- Title: TPCN: Temporal Point Cloud Networks for Motion Forecasting
- Authors: Maosheng Ye, Tongyi Cao, Qifeng Chen
- Abstract summary: We propose a novel framework with joint spatial and temporal learning for trajectory prediction.
In the spatial dimension, agents can be viewed as an unordered point set, and thus it is straightforward to apply point cloud learning techniques to model agents' locations.
Experiments on the Argoverse motion forecasting benchmark show that our approach achieves state-of-the-art results.
- Score: 47.829152433166016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the Temporal Point Cloud Networks (TPCN), a novel and flexible
framework with joint spatial and temporal learning for trajectory prediction.
Unlike existing approaches that rasterize agents and map information as 2D
images or operate in a graph representation, our approach extends ideas from
point cloud learning with dynamic temporal learning to capture both spatial and
temporal information by splitting trajectory prediction into both spatial and
temporal dimensions. In the spatial dimension, agents can be viewed as an
unordered point set, and thus it is straightforward to apply point cloud
learning techniques to model agents' locations. While the spatial dimension
does not take kinematic and motion information into account, we further propose
dynamic temporal learning to model agents' motion over time. Experiments on the
Argoverse motion forecasting benchmark show that our approach achieves
state-of-the-art results.
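The abstract's spatial/temporal split can be illustrated with a small sketch. This is not TPCN's actual architecture; it only shows the two ingredients the abstract names, under assumed stand-ins: a PointNet-style shared layer with symmetric max pooling for the unordered agent set (spatial), and frame-to-frame displacements as a minimal proxy for dynamic temporal learning. All names and shapes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_encode(points, W):
    """PointNet-style encoder: a shared per-point linear layer with ReLU,
    followed by symmetric max pooling, so the result is invariant to
    the ordering of agents in the set."""
    feats = np.maximum(points @ W, 0.0)   # shared per-point transform
    return feats.max(axis=0)              # order-invariant pooling

def temporal_features(trajectory):
    """Minimal motion cue per time step: frame-to-frame displacements,
    a stand-in for the paper's dynamic temporal learning."""
    return np.diff(trajectory, axis=0)

# Toy scene: 5 agents at 2D locations in one frame.
agents = rng.normal(size=(5, 2))
W = rng.normal(size=(2, 16))

f1 = spatial_encode(agents, W)
f2 = spatial_encode(agents[::-1], W)      # same set, different order
assert np.allclose(f1, f2)                # permutation invariance holds

# One agent's 10-step trajectory -> 9 displacement vectors.
traj = np.cumsum(rng.normal(size=(10, 2)), axis=0)
vel = temporal_features(traj)
print(vel.shape)  # (9, 2)
```

The key property the sketch demonstrates is that max pooling makes the spatial encoding independent of agent ordering, which is what lets point cloud techniques apply to an unordered agent set.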
Related papers
- Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Spatio-temporal Tendency Reasoning for Human Body Pose and Shape
Estimation from Videos [10.50306784245168]
We present a spatio-temporal tendency reasoning (STR) network for recovering human body pose and shape from videos.
Our STR aims to learn accurate spatial motion sequences in an unconstrained environment.
Our STR remains competitive with the state-of-the-art on three datasets.
arXiv Detail & Related papers (2022-10-07T16:09:07Z) - PSTNet: Point Spatio-Temporal Convolution on Point Cloud Sequences [51.53563462897779]
We propose a point spatio-temporal (PST) convolution to achieve informative representations of point cloud sequences.
PST first disentangles space and time in point cloud sequences; then a spatial convolution is employed to capture the local structure of points in 3D space, and a temporal convolution is used to model the dynamics of the spatial regions along the time dimension.
We incorporate the proposed PST convolution into a deep network, namely PSTNet, to extract features of point cloud sequences in a hierarchical manner.
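PST's two-stage idea, per-frame spatial aggregation followed by a convolution along the time axis, can be sketched roughly as below. The radius-based average in `spatial_pool` and the fixed smoothing kernel are illustrative stand-ins for PSTNet's learned convolutions, not the paper's actual operators.

```python
import numpy as np

def spatial_pool(points, feats, radius=1.0):
    """Per-frame spatial aggregation stand-in: average the features of
    all points within `radius` of each point (local 3D structure)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    mask = d < radius                      # each point's local neighborhood
    return (mask @ feats) / mask.sum(axis=1, keepdims=True)

def temporal_conv(seq_feats, kernel):
    """1D convolution along the time axis over per-frame features,
    modelling dynamics of spatial regions across frames."""
    T, k = len(seq_feats), len(kernel)
    out = [sum(kernel[i] * seq_feats[t + i] for i in range(k))
           for t in range(T - k + 1)]
    return np.stack(out)

rng = np.random.default_rng(1)
T, N, C = 6, 8, 4                          # frames, points per frame, channels
clouds = rng.normal(size=(T, N, 3))        # point coordinates per frame
feats = rng.normal(size=(T, N, C))         # point features per frame

# Stage 1: spatial aggregation within each frame.
per_frame = np.stack([spatial_pool(clouds[t], feats[t]) for t in range(T)])
# Stage 2: temporal convolution across frames.
out = temporal_conv(per_frame, kernel=np.array([0.25, 0.5, 0.25]))
print(out.shape)  # (4, 8, 4)
```

Disentangling the two stages keeps each one cheap: the spatial step never mixes frames, and the temporal step operates on already-aggregated per-frame features.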
arXiv Detail & Related papers (2022-05-27T02:14:43Z) - STONet: A Neural-Operator-Driven Spatio-temporal Network [38.5696882090282]
Graph-based spatio-temporal neural networks are effective at modeling spatial dependencies among discrete points sampled irregularly.
We propose a spatio-temporal framework based on neural operators for PDEs, which learns the mechanisms governing the dynamics of spatially-continuous physical quantities.
Experiments show our model's performance on forecasting spatially-continuous physical quantities, its generalization to unseen spatial points, and its ability to handle temporally-irregular data.
arXiv Detail & Related papers (2022-04-18T17:20:12Z) - CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations [72.4716073597902]
We propose a method to learn Canonical Spatiotemporal Point Cloud Representations of dynamically moving objects.
We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation.
arXiv Detail & Related papers (2020-08-06T17:58:48Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z) - Unsupervised Learning of Global Registration of Temporal Sequence of
Point Clouds [16.019588704177288]
Global registration of point clouds aims to find an optimal alignment of a sequence of 2D or 3D point sets.
We present a novel method that takes advantage of current deep learning techniques for unsupervised learning of global registration from a temporal sequence of point clouds.
arXiv Detail & Related papers (2020-06-17T06:00:36Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for
Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we build a joint feature sequence from the sequence and instant state information so that the generated trajectories maintain spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.