Learning Lane Graph Representations for Motion Forecasting
- URL: http://arxiv.org/abs/2007.13732v1
- Date: Mon, 27 Jul 2020 17:59:49 GMT
- Title: Learning Lane Graph Representations for Motion Forecasting
- Authors: Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, Raquel
Urtasun
- Abstract summary: We construct a lane graph from raw map data to preserve the map structure.
We exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor, and actor-to-actor.
Our approach significantly outperforms the state-of-the-art on the large scale Argoverse motion forecasting benchmark.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a motion forecasting model that exploits a novel structured map
representation as well as actor-map interactions. Instead of encoding
vectorized maps as raster images, we construct a lane graph from raw map data
to explicitly preserve the map structure. To capture the complex topology and
long range dependencies of the lane graph, we propose LaneGCN which extends
graph convolutions with multiple adjacency matrices and along-lane dilation. To
capture the complex interactions between actors and maps, we exploit a fusion
network consisting of four types of interactions: actor-to-lane, lane-to-lane,
lane-to-actor and actor-to-actor. Powered by LaneGCN and actor-map
interactions, our model is able to predict accurate and realistic multi-modal
trajectories. Our approach significantly outperforms the state-of-the-art on
the large scale Argoverse motion forecasting benchmark.
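To make the abstract's key ideas concrete, here is a minimal sketch of a LaneGCN-style layer: a graph convolution that sums over several adjacency matrices (one per lane relation) and uses matrix powers of the successor adjacency to mimic along-lane dilation. This is an illustrative sketch, not the authors' implementation; the function names, the ReLU, and the use of plain matrix powers are assumptions.

```python
import numpy as np

def lane_gcn_layer(x, adjs, weights, w_self):
    """One LaneGCN-style layer (sketch): sum graph convolutions over
    several adjacency matrices (e.g. predecessor/successor at different
    dilations, left/right neighbours), then apply a ReLU.
    x:       (N, d) lane-node features
    adjs:    list of (N, N) adjacency matrices, one per relation type
    weights: list of (d, d) weight matrices, one per relation type
    w_self:  (d, d) weight applied to each node's own features
    """
    out = x @ w_self
    for a, w in zip(adjs, weights):
        out = out + a @ x @ w  # aggregate along each relation separately
    return np.maximum(out, 0.0)  # ReLU

def dilated(adj, k):
    """k-hop adjacency along the lane direction: A^k reaches k steps
    down the lane, mimicking the paper's along-lane dilation idea."""
    return np.linalg.matrix_power(adj, k)
```

Using separate weights per adjacency lets the layer treat predecessors, successors, and neighbours differently, which a single shared graph convolution cannot.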
Related papers
- ProIn: Learning to Predict Trajectory Based on Progressive Interactions for Autonomous Driving [11.887346755144485]
A progressive interaction network is proposed to enable the agent's feature to progressively focus on relevant maps.
The network progressively encodes the complex influence of map constraints into the agent's feature through graph convolutions.
Experiments have validated the superiority of progressive interactions to the existing one-stage interaction.
arXiv Detail & Related papers (2024-03-25T02:38:34Z)
- Heterogeneous Graph-based Trajectory Prediction using Local Map Context and Social Interactions [47.091620047301305]
We present a novel approach for vector-based trajectory prediction that addresses shortcomings by leveraging three crucial sources of information.
First, we model interactions between traffic agents with a semantic scene graph that accounts for the nature and important features of their relations.
Second, we extract agent-centric image-based map features to model the local map context.
arXiv Detail & Related papers (2023-11-30T13:46:05Z)
- GoRela: Go Relative for Viewpoint-Invariant Motion Forecasting [121.42898228997538]
We propose an efficient shared encoding for all agents and the map without sacrificing accuracy or generalization.
We leverage pair-wise relative positional encodings to represent geometric relationships between the agents and the map elements in a heterogeneous spatial graph.
Our decoder is also viewpoint agnostic, predicting agent goals on the lane graph to enable diverse and context-aware multimodal prediction.
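The viewpoint-invariance idea above can be sketched as follows: express every element's position and heading in every other element's local frame, so the pairwise encoding is unchanged by global rotations and translations. This is an illustrative sketch of the concept, not GoRela's actual encoding; the function name and the (dx, dy, dyaw) feature choice are assumptions.

```python
import numpy as np

def relative_pose(pos, yaw):
    """Pairwise relative poses (sketch): element j expressed in the local
    frame of element i, making the features viewpoint-invariant.
    pos: (N, 2) positions; yaw: (N,) headings in radians.
    Returns (N, N, 3): dx, dy in i's frame, plus relative heading."""
    d = pos[None, :, :] - pos[:, None, :]  # global offsets j - i
    c, s = np.cos(-yaw), np.sin(-yaw)      # rotate offsets into i's frame
    dx = c[:, None] * d[..., 0] - s[:, None] * d[..., 1]
    dy = s[:, None] * d[..., 0] + c[:, None] * d[..., 1]
    dyaw = yaw[None, :] - yaw[:, None]
    return np.stack([dx, dy, dyaw], axis=-1)
```

Because only relative quantities are used, applying the same rigid transform to every agent and map element leaves the features unchanged.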
arXiv Detail & Related papers (2022-11-04T16:10:50Z)
- Path-Aware Graph Attention for HD Maps in Motion Prediction [4.531240717484252]
The success of motion prediction for autonomous driving relies on integrating information from HD maps.
We propose Path-Aware Graph Attention, a novel attention architecture that infers the attention between two vertices by parsing the sequence of edges forming the paths that connect them.
Our analysis illustrates how the proposed attention mechanism can facilitate learning in a didactic problem where existing graph networks like GCN struggle.
arXiv Detail & Related papers (2022-02-23T09:43:47Z)
- Decoder Fusion RNN: Context and Interaction Aware Decoders for Trajectory Prediction [53.473846742702854]
We propose a recurrent, attention-based approach for motion forecasting.
Decoder Fusion RNN (DF-RNN) is composed of a recurrent behavior encoder, an inter-agent multi-headed attention module, and a context-aware decoder.
We demonstrate the efficacy of our method by testing it on the Argoverse motion forecasting dataset and show its state-of-the-art performance on the public benchmark.
arXiv Detail & Related papers (2021-08-12T15:53:37Z)
- LaneRCNN: Distributed Representations for Graph-Centric Motion Forecasting [104.8466438967385]
LaneRCNN is a graph-centric motion forecasting model.
We learn a local lane graph representation per actor to encode its past motions and the local map topology.
We parameterize the output trajectories based on lane graphs, a more amenable prediction parameterization.
arXiv Detail & Related papers (2021-01-17T11:54:49Z)
- Interaction-Based Trajectory Prediction Over a Hybrid Traffic Graph [4.574413934477815]
We propose to use a hybrid graph whose nodes represent both the traffic actors as well as the static and dynamic traffic elements present in the scene.
The different modes of temporal interaction (e.g., stopping and going) among actors and traffic elements are explicitly modeled by graph edges.
We show that our proposed model, TrafficGraphNet, achieves state-of-the-art trajectory prediction accuracy while maintaining a high level of interpretability.
arXiv Detail & Related papers (2020-09-27T18:20:03Z)
- VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation [74.56282712099274]
This paper introduces VectorNet, a hierarchical graph neural network that exploits the spatial locality of individual road components represented by vectors.
By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps.
We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset.
arXiv Detail & Related papers (2020-05-08T19:07:03Z)
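The vectorized representation contrasted with rasterization in several of these papers can be sketched in a few lines: a map polyline or trajectory becomes a set of vectors (start point, end point, polyline id) rather than pixels. This is a simplified sketch under assumptions; VectorNet's actual features also include attributes such as semantic labels and timestamps, and the function name is mine.

```python
import numpy as np

def vectorize_polyline(points, polyline_id):
    """VectorNet-style vectorization (sketch): turn an ordered polyline
    into per-segment vectors [start_x, start_y, end_x, end_y, id],
    avoiding any lossy rasterization of the map.
    points: (M, 2) ordered points -> (M-1, 5) vector features."""
    starts, ends = points[:-1], points[1:]
    ids = np.full((len(starts), 1), float(polyline_id))
    return np.concatenate([starts, ends, ids], axis=1)
```

Each polyline's vectors can then be fed to a local graph network, with a global graph modelling interactions between polylines.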
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.