Stepwise Goal-Driven Networks for Trajectory Prediction
- URL: http://arxiv.org/abs/2103.14107v1
- Date: Thu, 25 Mar 2021 19:51:54 GMT
- Title: Stepwise Goal-Driven Networks for Trajectory Prediction
- Authors: Chuhua Wang, Yuchen Wang, Mingze Xu, David J. Crandall
- Abstract summary: We propose to predict the future trajectories of observed agents by estimating and using their goals at multiple time scales.
We present a novel recurrent network for trajectory prediction, called Stepwise Goal-Driven Network (SGNet).
In particular, the framework incorporates an encoder module that captures historical information, a stepwise goal estimator that predicts successive goals into the future, and a decoder module that predicts the future trajectory.
- Score: 24.129731432223416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to predict the future trajectories of observed agents (e.g.,
pedestrians or vehicles) by estimating and using their goals at multiple time
scales. We argue that the goal of a moving agent may change over time, and
modeling goals continuously provides more accurate and detailed information for
future trajectory estimation. In this paper, we present a novel recurrent
network for trajectory prediction, called Stepwise Goal-Driven Network (SGNet).
Unlike prior work that models only a single, long-term goal, SGNet estimates
and uses goals at multiple temporal scales. In particular, the framework
incorporates an encoder module that captures historical information, a stepwise
goal estimator that predicts successive goals into the future, and a decoder
module that predicts the future trajectory. We evaluate our model on three
first-person traffic datasets (HEV-I, JAAD, and PIE) as well as on two bird's
eye view datasets (ETH and UCY), and show that our model outperforms the
state-of-the-art methods in terms of both average and final displacement errors
on all datasets. Code has been made available at:
https://github.com/ChuhuaW/SGNet.pytorch.
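For a concrete picture of the encoder / stepwise-goal-estimator / decoder pipeline described in the abstract, the following is a minimal PyTorch sketch. It is an illustrative assumption based only on the abstract, not the authors' code (the official implementation is in the repository linked above); the GRU cells, hidden size, one-goal-per-future-step layout, and the `ade_fde` helper for the reported displacement-error metrics are all choices made here for illustration.

```python
import torch
import torch.nn as nn


class StepwiseGoalSketch(nn.Module):
    """Minimal encoder / stepwise-goal-estimator / decoder sketch.

    This only illustrates the pipeline named in the abstract; the GRU
    cells, hidden size, and one-goal-per-future-step layout are
    assumptions, not SGNet's actual design.
    """

    def __init__(self, in_dim=2, hidden=128, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        # Encoder: captures historical information from the observed track.
        self.encoder = nn.GRUCell(in_dim, hidden)
        # Stepwise goal estimator: one (x, y) goal per future time step.
        self.goal_estimator = nn.Linear(hidden, pred_len * 2)
        self.goal_embed = nn.Linear(2, hidden)
        # Decoder: rolls out the future trajectory, conditioned on goals.
        self.decoder = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, 2)

    def forward(self, obs_traj):
        # obs_traj: (batch, obs_len, 2) observed positions.
        batch, obs_len, _ = obs_traj.shape
        h = obs_traj.new_zeros(batch, self.encoder.hidden_size)
        for t in range(obs_len):
            h = self.encoder(obs_traj[:, t], h)

        goals = self.goal_estimator(h).view(batch, self.pred_len, 2)

        preds, dh = [], h
        for t in range(self.pred_len):
            # Each decoding step sees the goal estimated for that step.
            dh = self.decoder(self.goal_embed(goals[:, t]), dh)
            preds.append(self.out(dh))
        return torch.stack(preds, dim=1), goals  # (batch, pred_len, 2)


def ade_fde(pred, gt):
    """Average / final displacement errors, as reported in the evaluation."""
    dist = torch.norm(pred - gt, dim=-1)  # (batch, pred_len)
    return dist.mean().item(), dist[:, -1].mean().item()
```

For example, `pred, goals = StepwiseGoalSketch()(torch.randn(8, 8, 2))` produces an (8, 12, 2) prediction, and `ade_fde(pred, torch.randn(8, 12, 2))` returns the two error metrics named in the abstract.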
Related papers
- A Multi-Stage Goal-Driven Network for Pedestrian Trajectory Prediction [6.137256382926171]
This paper proposes a novel method for pedestrian trajectory prediction, called the multi-stage goal-driven network (MGNet).
The network comprises three main components: a conditional variational autoencoder (CVAE), an attention module, and a multi-stage goal evaluator.
The effectiveness of MGNet is demonstrated through comprehensive experiments on the JAAD and PIE datasets.
arXiv Detail & Related papers (2024-06-26T03:59:21Z)
- HPNet: Dynamic Trajectory Forecasting with Historical Prediction Attention [76.37139809114274]
HPNet is a novel dynamic trajectory forecasting method.
We propose a Historical Prediction Attention module to automatically encode the dynamic relationship between successive predictions.
Our code is available at https://github.com/XiaolongTang23/HPNet.
arXiv Detail & Related papers (2024-04-09T14:42:31Z)
- OFMPNet: Deep End-to-End Model for Occupancy and Flow Prediction in Urban Environment [0.0]
We introduce an end-to-end neural network methodology designed to predict the future behaviors of all dynamic objects in the environment.
We propose a novel time-weighted motion flow loss, whose application has shown a substantial decrease in end-point error; one possible form of such a loss is sketched after this list.
arXiv Detail & Related papers (2024-04-02T19:37:58Z)
- Interpretable Long Term Waypoint-Based Trajectory Prediction Model [1.4778851751964937]
We study the impact of adding a long-term goal on the performance of a trajectory prediction framework.
We present an interpretable long-term waypoint-driven prediction framework (WayDCM).
arXiv Detail & Related papers (2023-12-11T09:10:22Z)
- JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
This dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z)
- Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z)
- LTN: Long-Term Network for Long-Term Motion Prediction [0.0]
We present a two-stage framework for long-term trajectory prediction, named Long-Term Network (LTN).
We first generate a set of proposed trajectories from a learned distribution using a Conditional Variational Autoencoder (CVAE), then classify them with binary labels and output the trajectories with the highest scores.
The results show that our method outperforms multiple state-of-the-art approaches in long-term trajectory prediction in terms of accuracy.
arXiv Detail & Related papers (2020-10-15T17:59:09Z)
- Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors [124.30562402952319]
The ability to predict and plan into the future is fundamental for agents acting in the world.
Current learning approaches for visual prediction and planning fail on long-horizon tasks.
We propose a framework for visual prediction and planning that is able to overcome both of these limitations.
arXiv Detail & Related papers (2020-06-23T17:58:56Z)
- PnPNet: End-to-End Perception and Prediction with Tracking in the Loop [82.97006521937101]
We tackle the problem of joint perception and motion forecasting in the context of self-driving vehicles.
We propose PnPNet, an end-to-end model that takes sensor data as input and outputs, at each time step, object tracks and their future trajectories.
arXiv Detail & Related papers (2020-05-29T17:57:25Z)
- STINet: Spatio-Temporal-Interactive Network for Pedestrian Detection and Trajectory Prediction [24.855059537779294]
We present a novel end-to-end two-stage network: the Spatio-Temporal-Interactive Network (STINet).
In addition to the 3D geometry of pedestrians, we model temporal information for each pedestrian.
Our method predicts both current and past locations in the first stage, so that each pedestrian can be linked across frames.
arXiv Detail & Related papers (2020-05-08T18:43:01Z)
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction [57.56466850377598]
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a graph-based framework to uncover relationships among different objects in the scene for reasoning about pedestrian intent.
Pedestrian intent, defined as the future action of crossing or not-crossing the street, is a very crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
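The OFMPNet entry above names a time-weighted motion flow loss but does not define it. The sketch below shows one plausible form, assumed here purely for illustration: the per-timestep L2 endpoint error of a predicted flow field is scaled by a normalized, time-dependent weight. The tensor layout and the weighting scheme are assumptions, not the paper's actual loss.

```python
import torch


def time_weighted_flow_loss(pred_flow, gt_flow, valid=None):
    """Hypothetical time-weighted motion flow loss (illustration only).

    pred_flow, gt_flow: (batch, T, H, W, 2) predicted / ground-truth flow
        vectors for each future step of an occupancy grid.
    valid: optional (batch, T, H, W) mask of cells with ground truth.
    """
    T = pred_flow.shape[1]
    # Linearly increasing weights over the prediction horizon, normalized
    # so they sum to one.
    w = torch.arange(1, T + 1, dtype=pred_flow.dtype, device=pred_flow.device)
    w = w / w.sum()
    # Per-cell endpoint error of the flow vectors.
    err = torch.norm(pred_flow - gt_flow, dim=-1)  # (batch, T, H, W)
    if valid is not None:
        err = err * valid
    # Average over the spatial grid, then weight each timestep.
    per_step = err.flatten(2).mean(dim=-1)  # (batch, T)
    return (per_step * w).sum(dim=-1).mean()
```

Weights that grow with the horizon emphasize the later, harder steps; a decreasing schedule would instead emphasize the near term. Which direction OFMPNet actually uses is not stated in the summary above.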