Vehicle Trajectory Prediction in Crowded Highway Scenarios Using Bird
Eye View Representations and CNNs
- URL: http://arxiv.org/abs/2008.11493v1
- Date: Wed, 26 Aug 2020 11:15:49 GMT
- Title: Vehicle Trajectory Prediction in Crowded Highway Scenarios Using Bird
Eye View Representations and CNNs
- Authors: R. Izquierdo, A. Quintanar, I. Parra, D. Fernandez-Llorca, and M. A.
Sotelo
- Abstract summary: This paper describes a novel approach to vehicle trajectory prediction employing graphic representations.
The problem is posed as an image-to-image regression task, training the network to learn the underlying relations between the traffic participants.
The model has been tested in highway scenarios with more than 30 vehicles simultaneously in two opposite traffic flow streams.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a novel approach to vehicle trajectory
prediction employing graphic representations. The vehicles are represented
as Gaussian distributions in a Bird Eye View. The U-net model is then used
to perform sequence-to-sequence predictions. This deep learning-based
methodology has been trained on the HighD dataset, which contains vehicle
detections in a highway scenario obtained from aerial imagery. The problem is
posed as an image-to-image regression task, training the network to learn the
underlying relations between the traffic participants. This approach generates
an estimation of the future appearance of the input scene, not trajectories or
numeric positions. An extra step extracts the positions from the predicted
representation with subpixel resolution. Different network configurations have
been tested, and the prediction error up to three seconds ahead is on the order
of the representation resolution. The model has been tested in highway
scenarios with more than 30 vehicles simultaneously in two opposite traffic
flow streams, showing good qualitative and quantitative results.
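The two non-network steps described in the abstract (rendering vehicles as Gaussian distributions onto a BEV grid, and recovering positions from a predicted grid with subpixel resolution) can be sketched as below. This is a minimal illustration, not the authors' implementation: the grid shape, metric resolution, Gaussian spread, and centroid window size are all assumed values chosen for the example.

```python
import numpy as np

def render_bev(positions, grid_shape=(128, 256), res=0.5, sigma=1.0):
    """Render vehicle positions as 2D Gaussians on a bird's eye view grid.

    positions: iterable of (x, y) metric coordinates; res: metres per cell;
    sigma: Gaussian spread in cells. All parameter values are illustrative.
    """
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]          # row and column index grids
    bev = np.zeros(grid_shape, dtype=np.float32)
    for x, y in positions:
        cx, cy = x / res, y / res        # metric -> cell coordinates
        bev += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return bev

def extract_position(bev, res=0.5, win=3):
    """Recover a vehicle position with subpixel accuracy as the
    intensity-weighted centroid of a window around the peak cell."""
    py, px = np.unravel_index(np.argmax(bev), bev.shape)
    y0, y1 = max(py - win, 0), min(py + win + 1, bev.shape[0])
    x0, x1 = max(px - win, 0), min(px + win + 1, bev.shape[1])
    patch = bev[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    m = patch.sum()
    # Weighted centroid in cells, converted back to metric coordinates.
    return (xs * patch).sum() / m * res, (ys * patch).sum() / m * res
```

Rendering a single vehicle at a non-integer metric position and extracting it back illustrates why the paper reports errors on the order of the representation resolution: the Gaussian footprint lets the centroid recover the position to a fraction of a cell.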
Related papers
- SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior [53.52396082006044]
Current methods struggle to maintain rendering quality at viewpoints that deviate significantly from the training viewpoints.
This issue stems from the sparse training views captured by a fixed camera on a moving vehicle.
We propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model.
arXiv Detail & Related papers (2024-03-29T09:20:29Z)
- BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle
Segmentation and Ego Vehicle Trajectory Prediction [4.328789276903559]
Trajectory prediction is a key task for vehicle autonomy.
There is a growing interest in learning-based trajectory prediction.
We show that there is potential to improve perception performance.
arXiv Detail & Related papers (2023-12-20T15:02:37Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Multi-Vehicle Trajectory Prediction at Intersections using State and
Intention Information [50.40632021583213]
Traditional approaches to predicting the future trajectories of road agents rely on knowledge of their past trajectories.
This work instead relies on having knowledge of the current state and intended direction to make predictions for multiple vehicles at intersections.
Message passing of this information between the vehicles provides each of them with a more holistic overview of the environment.
arXiv Detail & Related papers (2023-01-06T15:13:23Z)
- Vehicle Trajectory Prediction on Highways Using Bird Eye View
Representations and Deep Learning [0.5420492913071214]
This work presents a novel method for predicting vehicle trajectories in highway scenarios using efficient bird's eye view representations and convolutional neural networks.
The U-net model has been selected as the prediction kernel to generate future visual representations of the scene using an image-to-image regression approach.
A method has been implemented to extract vehicle positions from the generated graphical representations to achieve subpixel resolution.
arXiv Detail & Related papers (2022-07-04T13:39:46Z)
- ParkPredict+: Multimodal Intent and Motion Prediction for Vehicles in
Parking Lots with CNN and Transformer [11.287187018907284]
Multimodal intent and trajectory prediction for human-driven vehicles in parking lots is addressed in this paper.
Using models designed with CNN and Transformer networks, we extract temporal-spatial and contextual information from trajectory history and local bird's eye view semantic images.
Our method outperforms existing models in accuracy, while allowing an arbitrary number of modes.
In addition, we present the first public human driving dataset in a parking lot with high resolution and rich traffic scenarios.
arXiv Detail & Related papers (2022-04-17T01:54:25Z)
- Vision-Guided Forecasting -- Visual Context for Multi-Horizon Time
Series Forecasting [0.6947442090579469]
We tackle multi-horizon forecasting of vehicle states by fusing the two modalities.
We design and experiment with 3D convolutions for visual features extraction and 1D convolutions for features extraction from speed and steering angle traces.
We show that we are able to forecast a vehicle's state to various horizons, while outperforming the current state-of-the-art results on the related task of driving state estimation.
arXiv Detail & Related papers (2021-07-27T08:52:40Z)
- Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles [8.828423067460644]
In highway scenarios, an alert human driver will typically anticipate early cut-in and cut-out maneuvers of surrounding vehicles using only visual cues.
To deal with lane-change recognition and prediction of surrounding vehicles, we pose the problem as an action recognition/prediction problem by stacking visual cues from video cameras.
Two video action recognition approaches are analyzed: two-stream convolutional networks and multiplier networks.
arXiv Detail & Related papers (2020-08-25T07:59:15Z)
- VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized
Representation [74.56282712099274]
This paper introduces VectorNet, a hierarchical graph neural network that exploits the spatial locality of individual road components represented by vectors.
By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps.
We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset.
arXiv Detail & Related papers (2020-05-08T19:07:03Z)
- Action Sequence Predictions of Vehicles in Urban Environments using Map
and Social Context [152.0714518512966]
This work studies the problem of predicting the sequence of future actions for surrounding vehicles in real-world driving scenarios.
The first contribution is an automatic method to convert the trajectories recorded in real-world driving scenarios to action sequences with the help of HD maps.
The second contribution lies in applying the method to the well-known traffic agent tracking and prediction dataset Argoverse, resulting in 228,000 action sequences.
The third contribution is to propose a novel action sequence prediction method by integrating past positions and velocities of the traffic agents, map information and social context into a single end-to-end trainable neural network.
arXiv Detail & Related papers (2020-04-29T14:59:58Z)
- TPNet: Trajectory Proposal Network for Motion Prediction [81.28716372763128]
Trajectory Proposal Network (TPNet) is a novel two-stage motion prediction framework.
TPNet first generates a candidate set of future trajectories as hypothesis proposals, then makes the final predictions by classifying and refining the proposals.
Experiments on four large-scale trajectory prediction datasets show that TPNet achieves state-of-the-art results both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-04-26T00:01:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.