SSP: Single Shot Future Trajectory Prediction
- URL: http://arxiv.org/abs/2004.05846v2
- Date: Mon, 9 Nov 2020 01:37:26 GMT
- Title: SSP: Single Shot Future Trajectory Prediction
- Authors: Isht Dwivedi, Srikanth Malla, Behzad Dariush, Chiho Choi
- Abstract summary: We propose a robust solution to future trajectory forecast, which can be practically applicable to autonomous agents in highly crowded environments.
First, we use composite fields to predict future locations of all road agents in a single shot, which results in constant time complexity regardless of the number of agents.
Second, interactions between agents are modeled as a non-local response, enabling spatial relationships between different locations to be captured temporally.
Third, the semantic context of the scene is modeled to take into account the environmental constraints that potentially influence future motion.
- Score: 26.18589883075203
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a robust solution to future trajectory forecast, which can be
practically applicable to autonomous agents in highly crowded environments. For
this, three aspects are particularly addressed in this paper. First, we use
composite fields to predict future locations of all road agents in a
single-shot, which results in a constant time complexity, regardless of the
number of agents in the scene. Second, interactions between agents are modeled
as a non-local response, enabling spatial relationships between different
locations to be captured temporally as well (i.e., in spatio-temporal
interactions). Third, the semantic context of the scene is modeled and takes
into account the environmental constraints that potentially influence the
future motion. To this end, we validate the robustness of the proposed approach
using the ETH, UCY, and SDD datasets and highlight its practical functionality
compared to the current state-of-the-art methods.
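The abstract's second point, modeling agent interactions as a non-local response over spatio-temporal features, follows the general non-local attention idea. Below is a minimal sketch of such a block, assuming a PyTorch-style implementation; the layer names, channel sizes, and the way it would plug into the composite-field decoder are illustrative assumptions, not the authors' released architecture.

# Minimal sketch of a non-local interaction block over a spatio-temporal
# feature volume. Illustrative only; all sizes and names are assumptions.
import torch
import torch.nn as nn


class NonLocalBlock3D(nn.Module):
    """Self-attention over all (time, height, width) positions of a feature map."""

    def __init__(self, channels: int, reduced: int = None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv3d(channels, reduced, kernel_size=1)  # query
        self.phi = nn.Conv3d(channels, reduced, kernel_size=1)    # key
        self.g = nn.Conv3d(channels, reduced, kernel_size=1)      # value
        self.out = nn.Conv3d(reduced, channels, kernel_size=1)    # project back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T, H, W) spatio-temporal features
        b, c, t, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, THW, c')
        k = self.phi(x).flatten(2)                    # (b, c', THW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, THW, c')
        # pairwise response between every pair of spatio-temporal positions
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, t, h, w)
        return x + self.out(y)                        # residual connection


if __name__ == "__main__":
    feats = torch.randn(1, 64, 4, 32, 32)     # e.g. 4 past frames of 32x32 features
    print(NonLocalBlock3D(64)(feats).shape)   # torch.Size([1, 64, 4, 32, 32])

Because the block attends over the whole feature volume rather than per-agent pairs, its cost depends on the feature-map size, not on the number of agents, which is consistent with the constant-time claim in the abstract.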
Related papers
- AMP: Autoregressive Motion Prediction Revisited with Next Token Prediction for Autonomous Driving [59.94343412438211]
We introduce GPT-style next-token prediction into motion prediction.
Unlike language data, which is composed of homogeneous units (words), the elements in a driving scene can have complex spatial-temporal and semantic relations.
We propose to adopt three factorized attention modules with different neighbors for information aggregation and different position encoding styles to capture their relations.
arXiv Detail & Related papers (2024-03-20T06:22:37Z) - Predicting Future Occupancy Grids in Dynamic Environment with
Spatio-Temporal Learning [63.25627328308978]
We propose a spatio-temporal prediction network pipeline to generate future occupancy predictions.
Compared to current SOTA, our approach predicts occupancy for a longer horizon of 3 seconds.
We publicly release our grid occupancy dataset based on nuScenes to support further research.
arXiv Detail & Related papers (2022-05-06T13:45:32Z) - Dynamic Relation Discovery and Utilization in Multi-Entity Time Series
Forecasting [92.32415130188046]
In many real-world scenarios, there can exist crucial yet implicit relations between entities.
We propose an attentional multi-graph neural network with automatic graph learning (A2GNN) in this work.
arXiv Detail & Related papers (2022-02-18T11:37:04Z) - MUSE-VAE: Multi-Scale VAE for Environment-Aware Long Term Trajectory
Prediction [28.438787700968703]
Conditional MUSE offers diverse and simultaneously more accurate predictions compared to the current state-of-the-art.
We demonstrate these assertions through a comprehensive set of experiments on nuScenes and SDD benchmarks as well as PFSD, a new synthetic dataset.
arXiv Detail & Related papers (2022-01-18T18:40:03Z) - Exploring Dynamic Context for Multi-path Trajectory Prediction [33.66335553588001]
We propose a novel framework named Dynamic Context Network (DCENet).
In our framework, the spatial context between agents is explored by using self-attention architectures.
A set of future trajectories for each agent is predicted conditioned on the learned spatial-temporal context.
arXiv Detail & Related papers (2020-10-30T13:39:20Z) - End-to-end Contextual Perception and Prediction with Interaction
Transformer [79.14001602890417]
We tackle the problem of detecting objects in 3D and forecasting their future motion in the context of self-driving.
To capture their spatial-temporal dependencies, we propose a recurrent neural network with a novel Transformer architecture.
Our model can be trained end-to-end, and runs in real-time.
arXiv Detail & Related papers (2020-08-13T14:30:12Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction:
multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z) - Robust Trajectory Forecasting for Multiple Intelligent Agents in Dynamic
Scene [11.91073327154494]
We present a novel method for robust trajectory forecasting of multiple agents in dynamic scenes.
The proposed method outperforms the state-of-the-art prediction methods in terms of prediction accuracy.
arXiv Detail & Related papers (2020-05-27T02:32:55Z) - UST: Unifying Spatio-Temporal Context for Trajectory Prediction in
Autonomous Driving [20.017491739890588]
We propose a unified approach that treats the time and space dimensions equally for modeling spatio-temporal context.
We show that the proposed method substantially outperforms the previous state-of-the-art methods while maintaining its simplicity.
arXiv Detail & Related papers (2020-05-06T13:02:57Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for
Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, a joint feature sequence is built from sequence and instant state information to make the generated trajectories keep spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.