VTGNet: A Vision-based Trajectory Generation Network for Autonomous
Vehicles in Urban Environments
- URL: http://arxiv.org/abs/2004.12591v3
- Date: Fri, 23 Oct 2020 08:46:58 GMT
- Authors: Peide Cai, Yuxiang Sun, Hengli Wang, Ming Liu
- Abstract summary: We develop an uncertainty-aware end-to-end trajectory generation method based on imitation learning.
Under various weather and lighting conditions, our network can reliably generate trajectories in different urban environments.
The proposed method achieves better cross-scene/platform driving results than the state-of-the-art (SOTA) end-to-end control method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional methods for autonomous driving are implemented with many building
blocks from perception, planning and control, making them difficult to
generalize to varied scenarios due to complex assumptions and
interdependencies. Recently, end-to-end driving methods have emerged, which
perform well and generalize to new environments by learning directly from
expert-provided data. However, many existing methods on this topic neglect to
check the confidence of the driving actions and the ability to recover from
driving mistakes. In this paper, we develop an uncertainty-aware end-to-end
trajectory generation method based on imitation learning. It can extract
spatiotemporal features from the front-view camera images for scene
understanding, and then generate collision-free trajectories several seconds
into the future. The experimental results suggest that under various weather
and lighting conditions, our network can reliably generate trajectories in
different urban environments, such as turning at intersections and slowing down
for collision avoidance. Furthermore, closed-loop driving tests suggest that
the proposed method achieves better cross-scene/platform driving results than
the state-of-the-art (SOTA) end-to-end control method, where our model can
recover from off-center and off-orientation errors and capture 80% of dangerous
cases with high uncertainty estimations.
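For intuition, an uncertainty-aware trajectory head of the kind the abstract describes is often built by having the network output, for each future waypoint, a mean position and a log-variance, training it with a Gaussian negative log-likelihood imitation loss, and flagging trajectories with high mean predicted log-variance as dangerous cases. The sketch below illustrates that idea under assumed design choices; the function names, loss form, and threshold are illustrative, not VTGNet's actual architecture:

```python
import numpy as np

def gaussian_nll(pred_mean, pred_log_var, target):
    """Per-waypoint imitation loss for a heteroscedastic Gaussian head:
    deviation from the expert waypoint is penalized, scaled by the
    network's own predicted uncertainty (log-variance)."""
    var = np.exp(pred_log_var)
    return 0.5 * (pred_log_var + (target - pred_mean) ** 2 / var)

def is_dangerous(pred_log_var, threshold=0.0):
    """Flag a trajectory as a potential dangerous case when its mean
    predicted log-variance exceeds a tuned threshold."""
    return float(np.mean(pred_log_var)) > threshold
```

A perfectly matched waypoint with unit variance (log-variance 0) incurs zero loss, while large errors under low predicted variance are penalized heavily, which pushes the network to report high uncertainty exactly where its predictions are unreliable.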
Related papers
- Building Real-time Awareness of Out-of-distribution in Trajectory Prediction for Autonomous Vehicles [8.398221841050349]
Trajectory prediction describes the motions of surrounding moving obstacles for an autonomous vehicle.
In this paper, we aim to establish real-time awareness of out-of-distribution in trajectory prediction for autonomous vehicles.
Our solutions are lightweight and can handle the occurrence of out-of-distribution at any time during trajectory prediction inference.
arXiv Detail & Related papers (2024-09-25T18:43:58Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- A Memory-Augmented Multi-Task Collaborative Framework for Unsupervised Traffic Accident Detection in Driving Videos [22.553356096143734]
We propose a novel memory-augmented multi-task collaborative framework (MAMTCF) for unsupervised traffic accident detection in driving videos.
Our method can more accurately detect both ego-involved and non-ego accidents by simultaneously modeling appearance changes and object motions in video frames.
arXiv Detail & Related papers (2023-07-27T01:45:13Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption of repeated traversals is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Learning Representation for Anomaly Detection of Vehicle Trajectories [15.20257956793474]
Predicting the future trajectories of surrounding vehicles based on their historical trajectories is a critical task in autonomous driving.
Small crafted perturbations can significantly mislead the future trajectory prediction module of the ego vehicle.
We propose two novel methods for learning effective and efficient representations for online anomaly detection of vehicle trajectories.
arXiv Detail & Related papers (2023-03-09T02:48:59Z)
- Unsupervised Driving Event Discovery Based on Vehicle CAN-data [62.997667081978825]
This work presents a simultaneous clustering and segmentation approach for vehicle CAN-data that identifies common driving events in an unsupervised manner.
We evaluate our approach with a dataset of real Tesla Model 3 vehicle CAN-data and a two-hour driving session that we annotated with different driving events.
arXiv Detail & Related papers (2023-01-12T13:10:47Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- An End-to-end Deep Reinforcement Learning Approach for the Long-term Short-term Planning on the Frenet Space [0.0]
This paper presents a novel end-to-end continuous deep reinforcement learning approach towards autonomous cars' decision-making and motion planning.
For the first time, we define both state and action spaces on the Frenet space to make the driving behavior less sensitive to road curvature.
The algorithm generates continuous temporal trajectories on the Frenet frame for the feedback controller to track.
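For readers unfamiliar with the Frenet frame: a point is expressed as an arc length s along a reference path plus a lateral offset d, which is what makes the representation largely invariant to road curvature. A minimal conversion back to Cartesian coordinates, using a hypothetical piecewise-linear reference path rather than that paper's implementation, can be sketched as:

```python
import math

def frenet_to_cartesian(s, d, ref_path):
    """Convert Frenet coordinates to Cartesian (x, y).

    s: arc length along the reference path; d: lateral offset (positive
    to the left of the travel direction). `ref_path` is assumed to be a
    list of (x, y) points sampled along the lane centerline; a
    piecewise-linear lookup keeps the sketch simple.
    """
    acc = 0.0  # arc length accumulated so far
    for (x0, y0), (x1, y1) in zip(ref_path, ref_path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue  # skip duplicate points
        if acc + seg >= s:
            t = (s - acc) / seg
            # point on the centerline at arc length s
            px, py = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            # unit normal pointing left of the travel direction
            nx, ny = -(y1 - y0) / seg, (x1 - x0) / seg
            return px + d * nx, py + d * ny
        acc += seg
    raise ValueError("s exceeds the reference path length")
```

On a straight reference path along the x-axis, the Frenet point (s=5, d=2) maps back to the Cartesian point (5, 2).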
arXiv Detail & Related papers (2020-11-26T02:40:07Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available nuScenes dataset.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.