A Spatial-Temporal Attentive Network with Spatial Continuity for
Trajectory Prediction
- URL: http://arxiv.org/abs/2003.06107v3
- Date: Thu, 14 Oct 2021 11:56:46 GMT
- Title: A Spatial-Temporal Attentive Network with Spatial Continuity for
Trajectory Prediction
- Authors: Beihao Xia, Conghao Wang, Qinmu Peng, Xinge You and Dacheng Tao
- Abstract summary: We propose a novel model named the spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to extract the most useful and important information.
Second, we construct a joint feature sequence from sequence and instant-state information so that the generated trajectories maintain spatial continuity.
- Score: 74.00750936752418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically predicting multi-agent trajectories remains challenging
due to multiple interactions, including agent-to-agent and scene-to-agent
interactions. Although recent methods have achieved promising performance, most
of them consider only the spatial influence of these interactions and ignore
the fact that temporal influence always accompanies it. Moreover, methods based
on scene information typically require extra segmented scene images to generate
multiple socially acceptable trajectories. To address these limitations, we
propose a novel model named the spatial-temporal attentive network with spatial
continuity (STAN-SC). First, a spatial-temporal attention mechanism is
presented to extract the most useful and important information. Second, we
construct a joint feature sequence from sequence and instant-state information
so that the generated trajectories maintain spatial continuity. Experiments on
the two widely used ETH and UCY datasets demonstrate that the proposed model
achieves state-of-the-art prediction accuracy and handles more complex
scenarios.
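The abstract describes the spatial-temporal attention mechanism only at a high level. As a rough illustration, here is a minimal NumPy sketch of one way attention over agent histories could combine a temporal branch and a spatial branch; all shapes, the shared projections, and the additive fusion are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_attention(h, d_k=16, seed=0):
    """Toy spatial-temporal attention over agent trajectory features.

    h: (T, N, D) hidden states for N agents over T observed steps.
    Returns an (N, d_k) context vector per agent that attends over both
    its own past time steps (temporal) and neighbouring agents (spatial).
    """
    rng = np.random.default_rng(seed)
    T, N, D = h.shape
    Wq, Wk, Wv = (rng.standard_normal((D, d_k)) * 0.1 for _ in range(3))

    # Temporal branch: each agent attends over its own history,
    # using its latest state as the query.
    q_t = h[-1] @ Wq                      # (N, d_k)
    k_t = h @ Wk                          # (T, N, d_k)
    v_t = h @ Wv                          # (T, N, d_k)
    scores_t = np.einsum('nd,tnd->nt', q_t, k_t) / np.sqrt(d_k)
    temp_ctx = np.einsum('nt,tnd->nd', softmax(scores_t, axis=1), v_t)

    # Spatial branch: each agent attends over all agents at the last step.
    q_s, k_s, v_s = h[-1] @ Wq, h[-1] @ Wk, h[-1] @ Wv
    scores_s = (q_s @ k_s.T) / np.sqrt(d_k)
    spat_ctx = softmax(scores_s, axis=1) @ v_s

    return temp_ctx + spat_ctx            # (N, d_k) fused context
```

A downstream decoder would consume this fused context to generate future positions; the paper's joint feature sequence for spatial continuity is not modeled here.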
Related papers
- Multimodal joint prediction of traffic spatial-temporal data with graph sparse attention mechanism and bidirectional temporal convolutional network [25.524351892847257]
We propose a method called Graph Sparse Attention Mechanism with Bidirectional Temporal Convolutional Network (GSABT) for multimodal traffic spatial-temporal joint prediction.
We use a multimodal graph multiplied by self-attention weights to capture spatial local features, and then employ the Top-U sparse attention mechanism to obtain spatial global features.
We have designed a multimodal joint prediction framework that can be flexibly extended to both spatial and temporal dimensions.
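The summary above gives no equations for the Top-U mechanism. A plausible minimal sketch, assuming "Top-U" means keeping only the u largest attention scores per query row and masking the rest (an assumption for illustration, not the paper's definition):

```python
import numpy as np

def top_u_sparse_attention(scores, u):
    """Sparse attention in the spirit of a Top-U mechanism: keep only the
    u largest scores in each query row, mask the rest to -inf, then apply
    a softmax so each node attends to at most u others.

    scores: (..., n) raw attention logits.  Returns normalized weights.
    """
    n = scores.shape[-1]
    u = min(u, n)
    # Indices of the u largest scores in each row.
    idx = np.argpartition(scores, -u, axis=-1)[..., -u:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    masked = scores + mask
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Masked positions receive exactly zero weight, which is what makes the resulting attention pattern sparse and global at the same time.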
arXiv Detail & Related papers (2024-12-24T12:57:52Z)
- SFANet: Spatial-Frequency Attention Network for Weather Forecasting [54.470205739015434]
Weather forecasting plays a critical role in various sectors, driving decision-making and risk management.
Traditional methods often struggle to capture the complex dynamics of meteorological systems.
We propose a novel framework designed to address these challenges and enhance the accuracy of weather prediction.
arXiv Detail & Related papers (2024-05-29T08:00:15Z)
- Triplet Attention Transformer for Spatiotemporal Predictive Learning [9.059462850026216]
We propose an innovative triplet attention transformer designed to capture both inter-frame dynamics and intra-frame static features.
The model incorporates the Triplet Attention Module (TAM), which replaces traditional recurrent units by exploring self-attention mechanisms in temporal, spatial, and channel dimensions.
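The Triplet Attention Module is summarized only at a high level. As a hedged illustration, here is a toy stand-in that reweights a (frames, positions, channels) feature tensor along the temporal, spatial, and channel axes in turn; the averaging-based gating here is an assumption, far simpler than the paper's self-attention design:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def triplet_attention(x):
    """Toy three-branch attention over a (T, S, C) video feature tensor:
    one gating branch per axis, each computed by averaging over the other
    two axes and softmaxing over the remaining one, then broadcast back.

    x: (T, S, C) = (frames, spatial positions, channels).
    """
    t_gate = softmax(x.mean(axis=(1, 2)), axis=0)[:, None, None]  # (T,1,1)
    s_gate = softmax(x.mean(axis=(0, 2)), axis=0)[None, :, None]  # (1,S,1)
    c_gate = softmax(x.mean(axis=(0, 1)), axis=0)[None, None, :]  # (1,1,C)
    # Average the three reweighted branches into one output tensor.
    return (x * t_gate + x * s_gate + x * c_gate) / 3.0
```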
arXiv Detail & Related papers (2023-10-28T12:49:33Z)
- Spatial-Temporal Knowledge-Embedded Transformer for Video Scene Graph Generation [64.85974098314344]
Video scene graph generation (VidSGG) aims to identify objects in visual scenes and infer their relationships for a given video.
Inherently, object pairs and their relationships enjoy spatial co-occurrence correlations within each image and temporal consistency/transition correlations across different images.
We propose a spatial-temporal knowledge-embedded transformer (STKET) that incorporates the prior spatial-temporal knowledge into the multi-head cross-attention mechanism.
arXiv Detail & Related papers (2023-09-23T02:40:28Z)
- Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network that uses incremental motion information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z)
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
- Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and physical connections of human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z)
- Exploring Dynamic Context for Multi-path Trajectory Prediction [33.66335553588001]
We propose a novel framework named Dynamic Context Network (DCENet).
In our framework, the spatial context between agents is explored by using self-attention architectures.
A set of future trajectories for each agent is predicted conditioned on the learned spatial-temporal context.
arXiv Detail & Related papers (2020-10-30T13:39:20Z)
- Robust Trajectory Forecasting for Multiple Intelligent Agents in Dynamic Scene [11.91073327154494]
We present a novel method for robust trajectory forecasting of multiple agents in dynamic scenes.
The proposed method outperforms the state-of-the-art prediction methods in terms of prediction accuracy.
arXiv Detail & Related papers (2020-05-27T02:32:55Z)
- SSP: Single Shot Future Trajectory Prediction [26.18589883075203]
We propose a robust solution to future trajectory forecasting that is practically applicable to autonomous agents in highly crowded environments.
First, we use composite fields to predict the future locations of all road agents in a single shot, which runs in constant time.
Second, interactions between agents are modeled as non-local responses, enabling spatial relationships between different locations to be captured over time.
Third, the semantic context of the scene is modeled to take into account the environmental constraints that potentially influence future motion.
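The "non-local" interaction modeling mentioned above is in the spirit of the non-local neural network operation, where every location aggregates a weighted response from every other location. A minimal sketch with identity embeddings (the method's learned projections and composite fields are omitted here for brevity):

```python
import numpy as np

def non_local_block(x):
    """Minimal non-local operation: every location computes a softmax-
    weighted aggregate of features from all other locations and adds it
    back as a residual response.

    x: (L, D) features at L locations.  Returns (L, D).
    """
    theta, phi, g = x, x, x               # identity embeddings for brevity
    affinity = theta @ phi.T              # (L, L) pairwise similarity
    e = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    weights = e / e.sum(axis=1, keepdims=True)
    return x + weights @ g                # residual non-local response
```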
arXiv Detail & Related papers (2020-04-13T09:56:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.