PLOP: Probabilistic poLynomial Objects trajectory Planning for
autonomous driving
- URL: http://arxiv.org/abs/2003.08744v3
- Date: Thu, 22 Oct 2020 08:29:19 GMT
- Title: PLOP: Probabilistic poLynomial Objects trajectory Planning for
autonomous driving
- Authors: Thibault Buhet, Emilie Wirbel, Andrei Bursuc, and Xavier Perrotton
- Abstract summary: We use a conditional imitation learning algorithm to predict trajectories for ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available nuScenes dataset.
- Score: 8.105493956485583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To navigate safely in urban environments, an autonomous vehicle (ego vehicle)
must understand and anticipate its surroundings, in particular the behavior and
intents of other road users (neighbors). Most of the time, multiple decision
choices are acceptable for all road users (e.g., turn right or left, or
different ways of avoiding an obstacle), leading to a highly uncertain and
multi-modal decision space. We focus here on predicting multiple feasible
future trajectories for both ego vehicle and neighbors through a probabilistic
framework. We rely on a conditional imitation learning algorithm, conditioned
by a navigation command for the ego vehicle (e.g., "turn right"). Our model
processes the ego vehicle's front-facing camera images and a bird's-eye view grid,
computed from Lidar point clouds, with detections of past and present objects,
in order to generate multiple trajectories for both ego vehicle and its
neighbors. Our approach is computationally efficient and relies only on
on-board sensors. We evaluate our method offline on the publicly available
nuScenes dataset, achieving state-of-the-art performance, investigate the
impact of our architecture choices in online simulated experiments, and show
preliminary insights for real vehicle control.
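For readers unfamiliar with the polynomial parameterization the title refers to, the sketch below illustrates one way a probabilistic, multi-modal trajectory output can be decoded: each of K candidate trajectories is a low-degree polynomial in time for x(t) and y(t) with an associated mixture weight. All names, shapes, and the degree-4 polynomial are illustrative assumptions, not the authors' implementation.

```python
# Illustrative multi-modal polynomial trajectory decoder (not the authors' code).
# Each of K modes is a 2D polynomial in time with a mixture weight.
import numpy as np

def decode_trajectories(mode_logits, poly_coeffs, horizon=4.0, steps=10):
    """mode_logits: (K,) unnormalized scores per mode.
    poly_coeffs: (K, 2, D) polynomial coefficients for x(t) and y(t).
    Returns mixture weights (K,) and trajectories (K, steps, 2)."""
    weights = np.exp(mode_logits - mode_logits.max())
    weights /= weights.sum()                                  # softmax over modes
    t = np.linspace(0.0, horizon, steps)                      # future timestamps (s)
    powers = np.stack([t ** d for d in range(poly_coeffs.shape[-1])], axis=-1)  # (steps, D)
    trajs = np.einsum('kcd,sd->ksc', poly_coeffs, powers)     # evaluate each mode
    return weights, trajs

# Toy usage: 3 modes, degree-4 polynomials (D = 5 coefficients) for x and y.
rng = np.random.default_rng(0)
weights, trajs = decode_trajectories(rng.normal(size=3), 0.1 * rng.normal(size=(3, 2, 5)))
print(weights.shape, trajs.shape)  # (3,) (3, 10, 2)
```

In a model of this kind, such a head would typically sit on top of the fused camera and bird's-eye-view features and be predicted per agent (ego vehicle and each neighbor).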
Related papers
- BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle
Segmentation and Ego Vehicle Trajectory Prediction [4.328789276903559]
Trajectory prediction is a key task for vehicle autonomy.
There is a growing interest in learning-based trajectory prediction.
We show that there is potential to improve the performance of perception.
arXiv Detail & Related papers (2023-12-20T15:02:37Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction for the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- Decision Making for Autonomous Driving in Interactive Merge Scenarios via Learning-based Prediction [39.48631437946568]
This paper focuses on the complex task of merging into moving traffic where uncertainty emanates from the behavior of other drivers.
We frame the problem as a partially observable Markov decision process (POMDP) and solve it online with Monte Carlo tree search.
The solution to the POMDP is a policy that performs high-level driving maneuvers, such as giving way to an approaching car, keeping a safe distance from the vehicle in front or merging into traffic.
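As a rough illustration of how such high-level maneuvers can be selected online with Monte Carlo tree search, here is a generic UCT sketch over three placeholder actions; the transition and reward model (step) is a stand-in, not the paper's learned prediction or POMDP formulation.

```python
# Generic UCT-style Monte Carlo tree search over high-level merge maneuvers.
# The step() transition/reward model is a placeholder, not the paper's POMDP.
import math
import random

ACTIONS = ["give_way", "keep_distance", "merge"]

def step(state, action):
    """Placeholder dynamics: state = (gap to traffic, timestep)."""
    gap, t = state
    gap += random.gauss(0.5 if action == "give_way" else -0.2, 0.3)
    reward = 1.0 if action == "merge" and gap > 2.0 else -0.1
    return (gap, t + 1), reward, t + 1 >= 5      # next state, reward, episode done

class Node:
    def __init__(self):
        self.visits, self.value, self.children = 0, 0.0, {}   # action -> Node

def uct_search(root_state, iters=2000, c=1.4):
    root = Node()
    for _ in range(iters):
        node, state, path, total, done = root, root_state, [], 0.0, False
        while not done:
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:                          # expansion
                action = random.choice(untried)
                node.children[action] = Node()
            else:                                # UCB1 selection
                action = max(ACTIONS, key=lambda a: node.children[a].value
                             / (node.children[a].visits + 1e-9)
                             + c * math.sqrt(math.log(node.visits + 1)
                                             / (node.children[a].visits + 1e-9)))
            state, reward, done = step(state, action)
            total += reward
            node = node.children[action]
            path.append(node)
        root.visits += 1                         # backpropagation
        for n in path:
            n.visits += 1
            n.value += total
    return max(ACTIONS, key=lambda a: root.children[a].visits)

print(uct_search((1.0, 0)))   # most-visited maneuver at the root
```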
arXiv Detail & Related papers (2023-03-29T16:12:45Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- End-to-end Interpretable Neural Motion Planner [78.69295676456085]
We propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios.
We design a holistic model that takes as input raw LIDAR data and an HD map and produces interpretable intermediate representations.
We demonstrate the effectiveness of our approach in real-world driving data captured in several cities in North America.
arXiv Detail & Related papers (2021-01-17T14:16:12Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
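A toy sketch of the idea of simulating perception output directly: perturb ground-truth object states with detection noise, missed detections, and false positives instead of rendering raw sensor data. The noise model and rates below are assumptions for illustration only, not the paper's simulator.

```python
# Toy perception-output simulator: noisy detections from ground-truth objects.
# Noise model and rates are assumptions for illustration only.
import random

def simulate_perception(gt_objects, pos_sigma=0.3, miss_rate=0.05, fp_rate=0.02):
    """gt_objects: list of dicts with 'x', 'y', 'heading' (metres, radians)."""
    detections = []
    for obj in gt_objects:
        if random.random() < miss_rate:          # missed detection
            continue
        detections.append({
            "x": obj["x"] + random.gauss(0.0, pos_sigma),
            "y": obj["y"] + random.gauss(0.0, pos_sigma),
            "heading": obj["heading"] + random.gauss(0.0, 0.05),
        })
    if random.random() < fp_rate:                # occasional false positive
        detections.append({"x": random.uniform(-50.0, 50.0),
                           "y": random.uniform(-50.0, 50.0),
                           "heading": 0.0})
    return detections

print(simulate_perception([{"x": 10.0, "y": 2.0, "heading": 0.1}]))
```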
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion [16.018962965273495]
All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability, utilizing information from the camera, lidar, and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
arXiv Detail & Related papers (2020-05-05T03:48:10Z)
- VTGNet: A Vision-based Trajectory Generation Network for Autonomous Vehicles in Urban Environments [26.558394047144006]
We develop an uncertainty-aware end-to-end trajectory generation method based on imitation learning.
Under various weather and lighting conditions, our network can reliably generate trajectories in different urban environments.
The proposed method achieves better cross-scene/platform driving results than the state-of-the-art (SOTA) end-to-end control method.
arXiv Detail & Related papers (2020-04-27T06:17:55Z)
- End-to-end Autonomous Driving Perception with Sequential Latent Representation Learning [34.61415516112297]
An end-to-end approach can simplify the system and avoid extensive human engineering effort.
A latent space is introduced to capture all relevant features useful for perception, which is learned through sequential latent representation learning.
The learned end-to-end perception model is able to solve the detection, tracking, localization and mapping problems altogether with only minimal human engineering effort.
arXiv Detail & Related papers (2020-03-21T05:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.