Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable
Semantic Representations
- URL: http://arxiv.org/abs/2008.05930v1
- Date: Thu, 13 Aug 2020 14:40:46 GMT
- Title: Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable
Semantic Representations
- Authors: Abbas Sadat, Sergio Casas, Mengye Ren, Xinyu Wu, Pranaab Dhawan,
Raquel Urtasun
- Abstract summary: We propose a novel end-to-end learnable network that performs joint perception, prediction and motion planning for self-driving vehicles.
Our network is learned end-to-end from human demonstrations.
- Score: 81.05412704590707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a novel end-to-end learnable network that performs
joint perception, prediction and motion planning for self-driving vehicles and
produces interpretable intermediate representations. Unlike existing neural
motion planners, our motion planning costs are consistent with our perception
and prediction estimates. This is achieved by a novel differentiable semantic
occupancy representation that is explicitly used as cost by the motion planning
process. Our network is learned end-to-end from human demonstrations. The
experiments on a large-scale manual-driving dataset and in closed-loop simulation
show that the proposed model significantly outperforms state-of-the-art
planners in imitating human driving behavior while producing much safer
trajectories.
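The core idea of the abstract — scoring candidate trajectories directly against a predicted semantic occupancy grid, so the planner's cost is consistent with perception and prediction — can be illustrated with a minimal sketch. This is not the paper's implementation; the grid, cell size, and candidate set below are illustrative assumptions, and the real system learns the occupancy end-to-end and optimizes over a much richer trajectory space.

```python
import numpy as np

def trajectory_cost(occupancy, traj, cell_size=1.0):
    """Sum predicted occupancy probabilities over the grid cells a trajectory visits."""
    cost = 0.0
    for x, y in traj:
        i, j = int(x / cell_size), int(y / cell_size)
        if 0 <= i < occupancy.shape[0] and 0 <= j < occupancy.shape[1]:
            cost += occupancy[i, j]
        else:
            cost += 1.0  # leaving the mapped area is maximally penalized
    return cost

# Toy 10x10 occupancy grid with one high-probability occupied region.
occ = np.zeros((10, 10))
occ[4:6, 4:6] = 0.9

# Two candidate ego trajectories: one through the occupied region, one around it.
through = [(float(x), 5.0) for x in range(10)]
around = [(float(x), 1.0) for x in range(10)]

# The planner selects the candidate with the lowest occupancy cost.
best = min([through, around], key=lambda t: trajectory_cost(occ, t))
```

Because the cost is a differentiable function of the occupancy values, a learned planner can backpropagate through it, which is what lets the paper train perception, prediction, and planning jointly from demonstrations.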
Related papers
- Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- End-to-End Interactive Prediction and Planning with Optical Flow
  Distillation for Autonomous Driving [16.340715765227475]
We propose an end-to-end interactive neural motion planner (INMP) for autonomous driving in this paper.
Our INMP first generates a feature map in bird's-eye-view space, which is then processed to detect other agents and perform interactive prediction and planning jointly.
Also, we adopt an optical flow distillation paradigm, which can effectively improve the network performance while still maintaining its real-time inference speed.
arXiv Detail & Related papers (2021-04-18T14:05:18Z)
- Leveraging Neural Network Gradients within Trajectory Optimization for
  Proactive Human-Robot Interactions [32.57882479132015]
We present a framework that fuses together the interpretability and flexibility of trajectory optimization (TO) with the predictive power of state-of-the-art human trajectory prediction models.
We demonstrate the efficacy of our approach in a multi-agent scenario whereby a robot is required to safely and efficiently navigate through a crowd of up to ten pedestrians.
arXiv Detail & Related papers (2020-12-02T08:43:36Z)
- MATS: An Interpretable Trajectory Forecasting Representation for
  Planning and Control [46.86174832000696]
Reasoning about human motion is a core component of modern human-robot interactive systems.
One of the main uses of behavior prediction in autonomous systems is to inform robot motion planning and control.
We propose a new output representation for trajectory forecasting that is more amenable to downstream planning and control use.
arXiv Detail & Related papers (2020-09-16T07:32:37Z)
- DSDNet: Deep Structured self-Driving Network [92.9456652486422]
We propose the Deep Structured self-Driving Network (DSDNet), which performs object detection, motion prediction, and motion planning with a single neural network.
We develop a deep structured energy based model which considers the interactions between actors and produces socially consistent multimodal future predictions.
arXiv Detail & Related papers (2020-08-13T17:54:06Z)
- Implicit Latent Variable Model for Scene-Consistent Motion Forecasting [78.74510891099395]
In this paper, we aim to learn scene-consistent motion forecasts of complex urban traffic directly from sensor data.
We model the scene as an interaction graph and employ powerful graph neural networks to learn a distributed latent representation of the scene.
arXiv Detail & Related papers (2020-07-23T14:31:25Z)
- The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z)
- PiP: Planning-informed Trajectory Prediction for Autonomous Driving [69.41885900996589]
We propose planning-informed trajectory prediction (PiP) to tackle the prediction problem in the multi-agent setting.
By informing the prediction process with the planning of the ego vehicle, our method achieves state-of-the-art performance in multi-agent forecasting on highway datasets.
arXiv Detail & Related papers (2020-03-25T16:09:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.