Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles
- URL: http://arxiv.org/abs/2008.10869v1
- Date: Tue, 25 Aug 2020 07:59:15 GMT
- Title: Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles
- Authors: David Fernández-Llorca, Mahdi Biparva, Rubén Izquierdo-Gonzalo and
John K. Tsotsos
- Abstract summary: In highway scenarios, an alert human driver will typically anticipate early cut-in and cut-out maneuvers of surrounding vehicles using only visual cues.
To deal with lane-change recognition and prediction of surrounding vehicles, we pose the problem as an action recognition/prediction problem by stacking visual cues from video cameras.
Two video action recognition approaches are analyzed: two-stream convolutional networks and spatiotemporal multiplier networks.
- Score: 8.828423067460644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In highway scenarios, an alert human driver will typically anticipate early
cut-in and cut-out maneuvers of surrounding vehicles using only visual cues. An
automated system must likewise anticipate these situations at an early stage to
increase its safety and efficiency. To deal with
lane-change recognition and prediction of surrounding vehicles, we pose the
problem as an action recognition/prediction problem by stacking visual cues
from video cameras. Two video action recognition approaches are analyzed:
two-stream convolutional networks and spatiotemporal multiplier networks.
Different sizes of the regions around the vehicles are analyzed, evaluating how
much the interaction between vehicles and the surrounding context information
contribute to performance. In addition, different prediction horizons are
evaluated. The
obtained results demonstrate the potential of these methodologies to serve as
robust predictors of future lane-changes of surrounding vehicles in time
horizons between 1 and 2 seconds.
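To make the architectural idea concrete, below is a minimal PyTorch sketch of a two-stream network for lane-change classification in the spirit of the abstract: a spatial stream takes an RGB crop of the region around the target vehicle, a temporal stream takes stacked optical-flow maps, and the class scores of both streams are fused late. The backbone choice (ResNet-18), the number of stacked flow frames, and the three output classes (cut-in, cut-out, no lane change) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a two-stream network for lane-change classification.
# Assumptions (not from the paper): ResNet-18 backbones, 10 stacked flow
# frames, three classes (cut-in, cut-out, no lane change), average fusion.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamLaneChangeNet(nn.Module):
    def __init__(self, num_classes: int = 3, flow_frames: int = 10):
        super().__init__()
        # Spatial stream: a single RGB crop of the region around the vehicle.
        self.spatial = models.resnet18(weights=None)
        self.spatial.fc = nn.Linear(self.spatial.fc.in_features, num_classes)

        # Temporal stream: stacked horizontal/vertical optical-flow maps
        # (2 channels per frame), so conv1 expects 2 * flow_frames channels.
        self.temporal = models.resnet18(weights=None)
        self.temporal.conv1 = nn.Conv2d(
            2 * flow_frames, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        self.temporal.fc = nn.Linear(self.temporal.fc.in_features, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # Late fusion: average the class scores of the two streams.
        return 0.5 * (self.spatial(rgb) + self.temporal(flow))


if __name__ == "__main__":
    model = TwoStreamLaneChangeNet()
    rgb = torch.randn(1, 3, 224, 224)    # one RGB crop around the target vehicle
    flow = torch.randn(1, 20, 224, 224)  # 10 stacked flow frames x 2 channels
    print(model(rgb, flow).shape)        # torch.Size([1, 3])
```

The spatiotemporal multiplier networks mentioned in the abstract extend this idea with multiplicative connections between the two streams; the sketch above only covers the basic two-stream variant with late fusion.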
Related papers
- PIP-Net: Pedestrian Intention Prediction in the Wild [11.799731429829603]
PIP-Net is a novel framework designed for AVs to predict pedestrian crossing intentions in real-world urban scenarios.
We offer two variants of PIP-Net designed for different camera mounts and setups.
The proposed model employs a recurrent and temporal attention-based solution that outperforms the state of the art.
For the first time, we present the Urban-PIP dataset, a customised pedestrian intention prediction dataset.
arXiv Detail & Related papers (2024-02-20T08:28:45Z) - BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle
Segmentation and Ego Vehicle Trajectory Prediction [4.328789276903559]
Trajectory prediction is a key task for vehicle autonomy.
There is a growing interest in learning-based trajectory prediction.
We show that there is potential to improve perception performance.
arXiv Detail & Related papers (2023-12-20T15:02:37Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - Multi-Vehicle Trajectory Prediction at Intersections using State and
Intention Information [50.40632021583213]
Traditional approaches to predicting the future trajectories of road agents rely on knowledge of their past trajectories.
This work instead relies on having knowledge of the current state and intended direction to make predictions for multiple vehicles at intersections.
Message passing of this information between the vehicles provides each one of them a more holistic overview of the environment.
arXiv Detail & Related papers (2023-01-06T15:13:23Z) - Predicting highway lane-changing maneuvers: A benchmark analysis of
machine and ensemble learning algorithms [0.0]
We compare different machine and ensemble learning classification techniques to the rule-based model.
We predict two types of discretionary lane-change maneuvers: Overtaking (from slow to fast lane) and fold-down.
While the rule-based model provides limited prediction accuracy, especially in the case of fold-down, the data-based algorithms, which are free of modeling bias, allow significant prediction improvements.
arXiv Detail & Related papers (2022-04-20T22:55:59Z) - Early Lane Change Prediction for Automated Driving Systems Using
Multi-Task Attention-based Convolutional Neural Networks [8.60064151720158]
Lane change (LC) is one of the safety-critical manoeuvres in highway driving.
Reliably predicting such manoeuvres in advance is critical for the safe and comfortable operation of automated driving systems.
This paper proposes a novel multi-task model to simultaneously estimate the likelihood of LC manoeuvres and the time-to-lane-change.
arXiv Detail & Related papers (2021-09-22T13:59:27Z) - Safety-aware Motion Prediction with Unseen Vehicles for Autonomous
Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z) - Video action recognition for lane-change classification and prediction
of surrounding vehicles [12.127050913280925]
Lane-change recognition and prediction tasks are posed as video action recognition problems.
We study the influence of context and observation horizons on performance, and analyze different prediction horizons.
The obtained results clearly demonstrate the potential of these methodologies to serve as robust predictors of future lane-changes of surrounding vehicles.
arXiv Detail & Related papers (2021-01-13T13:25:00Z) - Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting [91.69900691029908]
We advocate for predicting both the individual motions and the scene occupancy map.
We propose a Scene-Actor Graph Neural Network (SA-GNN) which preserves the relative spatial information of pedestrians.
On two large-scale real-world datasets, we showcase that our scene-occupancy predictions are more accurate and better calibrated than those from state-of-the-art motion forecasting methods.
arXiv Detail & Related papers (2021-01-07T06:08:21Z) - What-If Motion Prediction for Autonomous Driving [58.338520347197765]
Viable solutions must account for both the static geometric context, such as road lanes, and dynamic social interactions arising from multiple actors.
We propose a recurrent graph-based attentional approach with interpretable geometric (actor-lane) and social (actor-actor) relationships.
Our model can produce diverse predictions conditioned on hypothetical or "what-if" road lanes and multi-actor interactions.
arXiv Detail & Related papers (2020-08-24T17:49:30Z) - V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and
Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.