LaneRCNN: Distributed Representations for Graph-Centric Motion
Forecasting
- URL: http://arxiv.org/abs/2101.06653v1
- Date: Sun, 17 Jan 2021 11:54:49 GMT
- Title: LaneRCNN: Distributed Representations for Graph-Centric Motion
Forecasting
- Authors: Wenyuan Zeng, Ming Liang, Renjie Liao, Raquel Urtasun
- Abstract summary: LaneRCNN is a graph-centric motion forecasting model.
We learn a local lane graph representation per actor to encode its past motions and the local map topology.
We parameterize the output trajectories based on lane graphs, a parameterization more amenable to prediction.
- Score: 104.8466438967385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forecasting the future behaviors of dynamic actors is an important task in
many robotics applications such as self-driving. It is extremely challenging as
actors have latent intentions and their trajectories are governed by complex
interactions with other actors and with the map. In this paper,
we propose LaneRCNN, a graph-centric motion forecasting model. Importantly,
relying on a specially designed graph encoder, we learn a local lane graph
representation per actor (LaneRoI) to encode its past motions and the local map
topology. We further develop an interaction module which permits efficient
message passing among local graph representations within a shared global lane
graph. Moreover, we parameterize the output trajectories based on lane graphs,
which is a parameterization more amenable to prediction. Our LaneRCNN captures the
actor-to-actor and the actor-to-map relations in a distributed and map-aware
manner. We demonstrate the effectiveness of our approach on the large-scale
Argoverse Motion Forecasting Benchmark. We achieve the 1st place on the
leaderboard and significantly outperform previous best results.
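Neither the abstract nor this page includes code, but the graph-centric idea can be illustrated. Below is a minimal Python sketch of message passing over a per-actor lane graph followed by lane-node goal scoring, in the spirit of the LaneRoI description above; the feature sizes, the mean-aggregation rule, and the goal-scoring head are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of graph-centric message passing on a per-actor lane graph
# (LaneRoI). An illustrative reconstruction, not the authors' code: feature
# sizes, the aggregation rule, and the weights are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

# A toy lane graph: 6 lane nodes, edges follow lane connectivity
# (predecessor/successor/neighbor). Stored as an adjacency list.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

# Per-node features for one actor's LaneRoI: lane geometry channels
# concatenated with the actor's past-motion encoding (sizes are made up).
D = 16
node_feats = rng.normal(size=(6, D)).astype(np.float32)

W_self = rng.normal(scale=0.1, size=(D, D)).astype(np.float32)
W_nbr = rng.normal(scale=0.1, size=(D, D)).astype(np.float32)

def message_passing_step(feats: np.ndarray) -> np.ndarray:
    """One round of propagation along lane topology: each node mixes its
    own state with the mean of its neighbors, then applies a ReLU."""
    out = np.empty_like(feats)
    for node, nbrs in adjacency.items():
        nbr_mean = feats[nbrs].mean(axis=0)
        out[node] = feats[node] @ W_self + nbr_mean @ W_nbr
    return np.maximum(out, 0.0)

# Stacking a few rounds grows each node's receptive field along the lanes,
# which is how map topology (forks, merges) shapes the actor encoding.
for _ in range(3):
    node_feats = message_passing_step(node_feats)

# Lane-graph output parameterization (also simplified): score each lane
# node as a candidate final position instead of regressing raw (x, y).
w_score = rng.normal(scale=0.1, size=(D,)).astype(np.float32)
goal_scores = node_feats @ w_score
print("most likely goal lane node:", int(goal_scores.argmax()))
```

Scoring lane nodes rather than regressing free-form coordinates is what makes the output parameterization map-aware: predictions are restricted to locations that exist on the lane graph.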
Related papers
- GoRela: Go Relative for Viewpoint-Invariant Motion Forecasting [121.42898228997538]
We propose an efficient shared encoding for all agents and the map without sacrificing accuracy or generalization.
We leverage pair-wise relative positional encodings to represent geometric relationships between the agents and the map elements in a heterogeneous spatial graph.
Our decoder is also viewpoint agnostic, predicting agent goals on the lane graph to enable diverse and context-aware multimodal prediction.
arXiv Detail & Related papers (2022-11-04T16:10:50Z)
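As a rough illustration of the pair-wise relative positional encodings GoRela describes, the sketch below encodes one scene element in another's local frame; because only relative quantities enter, the result is invariant to the global viewpoint. The exact feature set used in the paper may differ, so treat this construction as an assumption.

```python
# Illustrative sketch of viewpoint-invariant pair-wise relative positional
# encodings in the spirit of GoRela; the exact features in the paper may
# differ (this version is an assumption).
import numpy as np

def relative_encoding(pos_i, heading_i, pos_j, heading_j):
    """Encode element j relative to element i's local frame. Because only
    relative quantities enter, the encoding is unchanged under any global
    translation/rotation of the scene (viewpoint invariance)."""
    c, s = np.cos(-heading_i), np.sin(-heading_i)
    rot = np.array([[c, -s], [s, c]])
    d = rot @ (np.asarray(pos_j) - np.asarray(pos_i))  # displacement in i's frame
    dtheta = heading_j - heading_i                      # relative heading
    return np.array([d[0], d[1], np.cos(dtheta), np.sin(dtheta)])

# Same pair under a global rotation (by 1.0 rad) plus shift -> identical encoding.
e1 = relative_encoding([0, 0], 0.1, [5, 2], 0.8)
e2 = relative_encoding([10, -3], 0.1 + 1.0,
                       [10 + 5*np.cos(1.0) - 2*np.sin(1.0),
                        -3 + 5*np.sin(1.0) + 2*np.cos(1.0)], 0.8 + 1.0)
print(np.allclose(e1, e2))  # True
```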
- Hierarchical Motion Encoder-Decoder Network for Trajectory Forecasting [2.3852339280654173]
Trajectory forecasting plays a pivotal role for intelligent vehicles and social robots.
Recent works focus on modeling spatial social interactions or temporal motion attention, but neglect the inherent properties of motion.
This paper proposes a context-free Hierarchical Motion Encoder-Decoder Network (HMNet) for vehicle trajectory prediction.
arXiv Detail & Related papers (2021-11-26T06:12:19Z)
- Decoder Fusion RNN: Context and Interaction Aware Decoders for Trajectory Prediction [53.473846742702854]
We propose a recurrent, attention-based approach for motion forecasting.
Decoder Fusion RNN (DF-RNN) is composed of a recurrent behavior encoder, an inter-agent multi-headed attention module, and a context-aware decoder.
We demonstrate the efficacy of our method by testing it on the Argoverse motion forecasting dataset and show its state-of-the-art performance on the public benchmark.
arXiv Detail & Related papers (2021-08-12T15:53:37Z)
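The DF-RNN summary names three components: a recurrent behavior encoder, an inter-agent multi-headed attention module, and a context-aware decoder. Here is a minimal PyTorch sketch of that composition; the layer sizes, the decoder unrolling scheme, and the output head are assumptions rather than the paper's configuration.

```python
# Minimal PyTorch sketch of the three-stage composition the DF-RNN summary
# describes: recurrent behavior encoder -> inter-agent multi-head attention
# -> decoder. Layer sizes and wiring are assumptions, not the paper's.
import torch
import torch.nn as nn

class DFRNNSketch(nn.Module):
    def __init__(self, in_dim=2, hid=64, heads=4, horizon=30):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)   # per-agent history
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.head = nn.Linear(hid, 2)                           # (x, y) per step
        self.horizon = horizon

    def forward(self, hist):                 # hist: (agents, T_past, 2)
        _, h = self.encoder(hist)            # h: (1, agents, hid)
        agents = h[0].unsqueeze(0)           # treat the scene as a "batch" of 1
        fused, _ = self.attn(agents, agents, agents)  # agents attend to each other
        # Unroll the decoder, feeding the fused social context at every step.
        steps = fused.squeeze(0).unsqueeze(1).repeat(1, self.horizon, 1)
        out, _ = self.decoder(steps)
        return self.head(out)                # (agents, horizon, 2)

model = DFRNNSketch()
pred = model(torch.randn(5, 20, 2))          # 5 agents, 20 past steps
print(pred.shape)                            # torch.Size([5, 30, 2])
```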
- SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network (SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions among pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z)
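A toy sketch of the sparse directed interaction idea: prune dense pairwise scores to a sparse, asymmetric adjacency, then aggregate over it. SGCN derives its sparse masks from learned self-attention; the top-k pruning rule and sizes below are stand-ins for illustration.

```python
# Rough sketch of sparse directed interaction modeling: dense pairwise
# scores are pruned to a sparse, asymmetric adjacency, then used for one
# graph-convolution step. The top-k rule and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, D = 8, 16                       # pedestrians, feature size
feats = rng.normal(size=(N, D))

# Asymmetric score matrix (who attends to whom); in the paper these scores
# are learned -- here they are random placeholders.
scores = rng.normal(size=(N, N))
np.fill_diagonal(scores, -np.inf)  # no self-edges

# Keep only each pedestrian's top-2 influencers -> sparse *directed* graph.
k = 2
adj = np.zeros((N, N))
top = np.argpartition(scores, -k, axis=1)[:, -k:]
adj[np.arange(N)[:, None], top] = 1.0

# Row-normalize and aggregate: each node averages its chosen influencers.
adj /= adj.sum(axis=1, keepdims=True)
W = rng.normal(scale=0.1, size=(D, D))
feats = np.maximum(adj @ feats @ W, 0.0)
print("directed edges kept:", int((adj > 0).sum()), "of", N * (N - 1))
```

Directedness matters here: pedestrian A can react to B without B reacting to A, which a symmetric adjacency cannot express.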
- Interaction-Based Trajectory Prediction Over a Hybrid Traffic Graph [4.574413934477815]
We propose a hybrid graph whose nodes represent both the traffic actors and the static and dynamic traffic elements present in the scene.
The different modes of temporal interaction (e.g., stopping and going) among actors and traffic elements are explicitly modeled by graph edges.
We show that our proposed model, TrafficGraphNet, achieves state-of-the-art trajectory prediction accuracy while maintaining a high level of interpretability.
arXiv Detail & Related papers (2020-09-27T18:20:03Z)
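To make the hybrid-graph idea concrete, here is a toy data structure with nodes for both actors and traffic elements and edges labeled by a temporal interaction mode; the type names and attributes are invented for illustration, not taken from the paper.

```python
# Toy data-structure sketch of a hybrid traffic graph: nodes for both
# actors and static/dynamic traffic elements, with edges labeled by a
# temporal interaction mode. Node/edge vocabularies are illustrative guesses.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                     # "vehicle", "pedestrian", "traffic_light", ...
    state: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: int
    dst: int
    mode: str                     # e.g. "stopping_for", "yielding_to", "following"

nodes = [
    Node("vehicle", {"speed": 8.2}),
    Node("traffic_light", {"phase": "red"}),
    Node("pedestrian", {"speed": 1.3}),
]
edges = [
    Edge(0, 1, "stopping_for"),   # vehicle 0 stops for the red light
    Edge(0, 2, "yielding_to"),    # and yields to the crossing pedestrian
]
# A predictor can condition message passing on edge.mode, so "stopping"
# and "going" interactions propagate different information.
for e in edges:
    print(f"{nodes[e.src].kind} --{e.mode}--> {nodes[e.dst].kind}")
```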
- Learning Lane Graph Representations for Motion Forecasting [92.88572392790623]
We construct a lane graph from raw map data to preserve the map structure.
We exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor, and actor-to-actor.
Our approach significantly outperforms the state of the art on the large-scale Argoverse motion forecasting benchmark.
arXiv Detail & Related papers (2020-07-27T17:59:49Z)
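The four-way fusion above can be sketched schematically. The version below reduces each interaction type to a mean-pooling update between the actor and lane feature sets; the actual model uses learned, spatially local operators, so this is only a shape-level illustration.

```python
# Schematic of the four-way fusion (actor-to-lane, lane-to-lane,
# lane-to-actor, actor-to-actor), reduced to mean-pooling between feature
# sets; the real model uses learned, local attention/convolutions.
import numpy as np

rng = np.random.default_rng(2)
actors = rng.normal(size=(3, 8))   # 3 actors, 8-dim features
lanes = rng.normal(size=(10, 8))   # 10 lane nodes

def fuse(dst, src):
    """Toy 'interaction': every destination feature absorbs the mean of the
    source set. Real fusion is local (nearby elements only) and learned."""
    return dst + src.mean(axis=0)

lanes = fuse(lanes, actors)        # actor-to-lane: map learns about traffic
lanes = fuse(lanes, lanes)         # lane-to-lane: propagate along the map
actors = fuse(actors, lanes)       # lane-to-actor: map context back to actors
actors = fuse(actors, actors)      # actor-to-actor: social interaction
print(actors.shape)                # (3, 8)
```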
- VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation [74.56282712099274]
This paper introduces VectorNet, a hierarchical graph neural network that exploits the spatial locality of individual road components represented by vectors.
By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps.
We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset.
arXiv Detail & Related papers (2020-05-08T19:07:03Z)
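A small sketch of the vectorized representation: polylines become start/end vectors, each polyline is encoded by a shared per-vector transform plus max-pooling, and the pooled tokens would feed a global interaction graph. The vectors in the paper carry additional attribute fields beyond the coordinates shown here.

```python
# Sketch of VectorNet-style vectorization: a map polyline (or trajectory)
# becomes a set of start/end vectors, each polyline is encoded by a shared
# transform + max-pool, and the pooled tokens feed a global interaction
# graph. The feature layout here is a simplification.
import numpy as np

rng = np.random.default_rng(3)

def vectorize(polyline: np.ndarray) -> np.ndarray:
    """Turn an (N, 2) polyline into (N-1, 4) start/end vectors."""
    return np.concatenate([polyline[:-1], polyline[1:]], axis=1)

def encode_polyline(vectors: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Shared 'subgraph' encoder: per-vector linear + ReLU, then max-pool
    over the polyline so its token is order-invariant."""
    return np.maximum(vectors @ W, 0.0).max(axis=0)

W = rng.normal(scale=0.5, size=(4, 32))
lane = np.array([[0, 0], [5, 0], [10, 1], [15, 3]], dtype=float)
history = np.array([[1, -2], [2, -2], [3, -1.5]], dtype=float)

tokens = np.stack([encode_polyline(vectorize(p), W) for p in (lane, history)])
print(tokens.shape)  # (2, 32): one token per polyline for the global graph
```

Operating on these vectors directly is what lets the model skip rasterizing the map into images, avoiding the lossy rendering and ConvNet encoding the summary mentions.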
- CoMoGCN: Coherent Motion Aware Trajectory Prediction with Graph Representation [12.580809204729583]
We propose a novel framework, coherent motion aware graph convolutional network (CoMoGCN), for trajectory prediction in crowded scenes with group constraints.
Our method achieves state-of-the-art performance on several different trajectory prediction benchmarks, and the best average performance among all benchmarks considered.
arXiv Detail & Related papers (2020-05-02T09:10:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.