What-If Motion Prediction for Autonomous Driving
- URL: http://arxiv.org/abs/2008.10587v1
- Date: Mon, 24 Aug 2020 17:49:30 GMT
- Title: What-If Motion Prediction for Autonomous Driving
- Authors: Siddhesh Khandelwal, William Qi, Jagjeet Singh, Andrew Hartnett, Deva Ramanan
- Abstract summary: Viable solutions must account for both the static geometric context, such as road lanes, and dynamic social interactions arising from multiple actors.
We propose a recurrent graph-based attentional approach with interpretable geometric (actor-lane) and social (actor-actor) relationships.
Our model can produce diverse predictions conditioned on hypothetical or "what-if" road lanes and multi-actor interactions.
- Score: 58.338520347197765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting the long-term future motion of road actors is a core challenge to
the deployment of safe autonomous vehicles (AVs). Viable solutions must account
for both the static geometric context, such as road lanes, and dynamic social
interactions arising from multiple actors. While recent deep architectures have
achieved state-of-the-art performance on distance-based forecasting metrics,
these approaches produce forecasts without regard to the AV's intended
motion plan. In contrast, we propose a recurrent graph-based
attentional approach with interpretable geometric (actor-lane) and social
(actor-actor) relationships that supports the injection of counterfactual
geometric goals and social contexts. Our model can produce diverse predictions
conditioned on hypothetical or "what-if" road lanes and multi-actor
interactions. We show that such an approach could be used in the planning loop
to reason about unobserved causes or unlikely futures that are directly
relevant to the AV's intended route.
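To make the core idea concrete, below is a minimal, illustrative sketch (in PyTorch, not the authors' released WIMP code) of graph-based actor-lane attention: an actor's encoded history attends over encoded lane polylines, the attention weights expose an interpretable actor-lane relationship, and a counterfactual "what-if" lane can be injected simply by swapping in a hypothetical polyline before the attention step. All module names, dimensions, and the single-head design are assumptions made for illustration.

```python
# Minimal sketch (not the authors' WIMP implementation): single-head
# attention from one actor's history encoding over candidate lane
# polylines, with a hook for substituting a counterfactual ("what-if")
# lane. All names and dimensions here are illustrative.
import torch
import torch.nn as nn


class ActorLaneAttention(nn.Module):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # GRU encodes each actor's observed trajectory (x, y per step).
        self.history_encoder = nn.GRU(2, hidden_dim, batch_first=True)
        # Lane centerline polylines (x, y per waypoint) are encoded likewise.
        self.lane_encoder = nn.GRU(2, hidden_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=1,
                                          batch_first=True)

    def forward(self, history: torch.Tensor, lanes: torch.Tensor):
        # history: (batch, T_obs, 2); lanes: (batch, n_lanes, T_lane, 2)
        _, h = self.history_encoder(history)             # (1, batch, d)
        query = h.transpose(0, 1)                        # (batch, 1, d)
        b, n, t, _ = lanes.shape
        _, lane_h = self.lane_encoder(lanes.reshape(b * n, t, 2))
        keys = lane_h.transpose(0, 1).reshape(b, n, -1)  # (batch, n, d)
        # The attention weights over lanes are the interpretable
        # actor-lane relationship; conditioning on a "what-if" lane
        # amounts to replacing rows of `lanes` before this call.
        ctx, weights = self.attn(query, keys, keys)
        return ctx.squeeze(1), weights.squeeze(1)


# Usage: inject a hypothetical lane polyline to obtain a counterfactual
# context vector for a downstream trajectory decoder.
model = ActorLaneAttention()
history = torch.randn(1, 20, 2)    # 2 s of observed motion at 10 Hz
lanes = torch.randn(1, 3, 10, 2)   # 3 candidate centerlines
lanes[0, 0] = torch.randn(10, 2)   # swap in a "what-if" lane
context, lane_weights = model(history, lanes)
```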
Related papers
- PPAD: Iterative Interactions of Prediction and Planning for End-to-end Autonomous Driving [57.89801036693292]
PPAD (Iterative Interaction of Prediction and Planning Autonomous Driving) considers timestep-wise interaction to better integrate prediction and planning.
We design ego-to-agent, ego-to-map, and ego-to-BEV interaction mechanisms with hierarchical dynamic key objects attention to better model the interactions.
arXiv Detail & Related papers (2023-11-14T11:53:24Z)
- LatentFormer: Multi-Agent Transformer-Based Interaction Modeling and Trajectory Prediction [12.84508682310717]
We propose LatentFormer, a transformer-based model for predicting future vehicle trajectories.
We evaluate the proposed method on the nuScenes benchmark dataset and show that our approach achieves state-of-the-art performance and improves upon trajectory metrics by up to 40%.
arXiv Detail & Related papers (2022-03-03T17:44:58Z)
- Scene Transformer: A unified multi-task model for behavior prediction and planning [42.758178896204036]
We formulate a model for predicting the behavior of all agents jointly in real-world driving environments.
Inspired by recent language modeling approaches, we use a masking strategy as the query to our model.
We evaluate our approach on autonomous driving datasets for behavior prediction, and achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-06-15T20:20:44Z)
- End-to-end Contextual Perception and Prediction with Interaction Transformer [79.14001602890417]
We tackle the problem of detecting objects in 3D and forecasting their future motion in the context of self-driving.
To capture their spatio-temporal dependencies, we propose a recurrent neural network with a novel Transformer architecture.
Our model can be trained end-to-end and runs in real time.
arXiv Detail & Related papers (2020-08-13T14:30:12Z)
- Implicit Latent Variable Model for Scene-Consistent Motion Forecasting [78.74510891099395]
In this paper, we aim to learn scene-consistent motion forecasts of complex urban traffic directly from sensor data.
We model the scene as an interaction graph and employ powerful graph neural networks to learn a distributed latent representation of the scene.
arXiv Detail & Related papers (2020-07-23T14:31:25Z)
- Physically constrained short-term vehicle trajectory forecasting with naive semantic maps [6.85316573653194]
We propose a model that learns to extract relevant road features from semantic maps as well as the general motion of agents.
We show that our model is not only capable of anticipating future motion while taking road boundaries into consideration, but can also effectively and precisely predict trajectories over a longer time horizon than it was initially trained for.
arXiv Detail & Related papers (2020-06-09T09:52:44Z)
- The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well-defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z)
- PiP: Planning-informed Trajectory Prediction for Autonomous Driving [69.41885900996589]
We propose planning-informed trajectory prediction (PiP) to tackle the prediction problem in the multi-agent setting.
By informing the prediction process with the planning of the ego vehicle, our method achieves state-of-the-art multi-agent forecasting performance on highway datasets.
arXiv Detail & Related papers (2020-03-25T16:09:54Z)
- Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network [29.289670231364788]
In this paper, we propose a generic generative neural system for multi-agent trajectory prediction.
We also employ an efficient kinematic constraint layer applied to vehicle trajectory prediction.
The proposed system is evaluated on three public benchmark datasets for trajectory prediction.
arXiv Detail & Related papers (2020-02-14T20:11:13Z)