LAformer: Trajectory Prediction for Autonomous Driving with Lane-Aware
Scene Constraints
- URL: http://arxiv.org/abs/2302.13933v1
- Date: Mon, 27 Feb 2023 16:34:16 GMT
- Authors: Mengmeng Liu, Hao Cheng, Lin Chen, Hellward Broszio, Jiangtao Li,
Runjiang Zhao, Monika Sester and Michael Ying Yang
- Abstract summary: Trajectory prediction for autonomous driving must continuously reason about the motion stochasticity of road agents and comply with scene constraints.
Existing methods typically rely on one-stage trajectory prediction models, which condition future trajectories on observed trajectories combined with fused scene information.
We present a novel method, called LAformer, which uses a temporally dense lane-aware estimation module to select only the top-k most likely lane segments in an HD map.
- Score: 16.861461971702596
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Trajectory prediction for autonomous driving must continuously reason about the motion stochasticity of road agents and comply with scene constraints. Existing
methods typically rely on one-stage trajectory prediction models, which
condition future trajectories on observed trajectories combined with fused
scene information. However, they often struggle with complex scene constraints,
such as those encountered at intersections. To this end, we present a novel
method, called LAformer. It uses a temporally dense lane-aware estimation module to select only the top-k most likely lane segments in an HD map,
which effectively and continuously aligns motion dynamics with scene
information, reducing the representation requirements for the subsequent
attention-based decoder by filtering out irrelevant lane segments.
Additionally, unlike one-stage prediction models, LAformer utilizes predictions
from the first stage as anchor trajectories and adds a second-stage motion
refinement module to further explore temporal consistency across the complete
time horizon. Extensive experiments on Argoverse 1 and nuScenes demonstrate
that LAformer achieves excellent performance for multimodal trajectory
prediction.
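The two-stage pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the dot-product lane scoring and the `anchor_decoder`/`refine_module` interfaces are assumptions made for the sketch.

```python
import numpy as np

def select_top_k_lanes(agent_feat, lane_feats, k=2):
    """Score each lane segment against the agent's motion feature and keep
    only the k most likely ones (hypothetical scoring: dot-product
    similarity followed by a softmax)."""
    scores = lane_feats @ agent_feat            # (num_lanes,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    top_idx = np.argsort(probs)[-k:][::-1]      # indices of the k best lanes
    return top_idx, probs[top_idx]

def two_stage_predict(anchor_decoder, refine_module, context):
    """Stage 1 produces anchor trajectories; stage 2 predicts residual
    offsets so the refined output stays temporally consistent."""
    anchors = anchor_decoder(context)           # (modes, horizon, 2)
    offsets = refine_module(context, anchors)   # same shape as anchors
    return anchors + offsets
```

Filtering to the top-k lanes before decoding is what shrinks the representation the attention-based decoder has to handle.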
Related papers
- Motion Forecasting in Continuous Driving [41.6423398623095]
In autonomous driving, motion forecasting takes place repeatedly and continuously as the self-driving car moves.
Existing forecasting methods process each driving scene within a certain range independently.
We propose a novel motion forecasting framework for continuous driving, named RealMotion.
arXiv Detail & Related papers (2024-10-08T13:04:57Z)
- AMP: Autoregressive Motion Prediction Revisited with Next Token Prediction for Autonomous Driving [59.94343412438211]
We introduce the GPT style next token motion prediction into motion prediction.
Unlike language data, which is composed of homogeneous units (words), the elements in a driving scene can have complex spatial-temporal and semantic relations.
We propose to adopt three factorized attention modules with different neighbors for information aggregation and different position encoding styles to capture their relations.
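The shared building block of such factorized attention modules is attention restricted to a neighbor set. A minimal single-head sketch under simplifying assumptions (the mask construction and the absence of learned projections are illustrative):

```python
import numpy as np

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted by a neighbor mask:
    mask[i, j] == True means query i may attend to key j. Each factorized
    module would supply a different mask (temporal, spatial, or map
    neighbors) and position-encoding scheme."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    logits = np.where(mask, logits, -1e9)       # block non-neighbors
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```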
arXiv Detail & Related papers (2024-03-20T06:22:37Z)
- Layout Sequence Prediction From Noisy Mobile Modality [53.49649231056857]
Trajectory prediction plays a vital role in understanding pedestrian movement for applications such as autonomous driving and robotics.
Current trajectory prediction models depend on long, complete, and accurately observed sequences from visual modalities.
We propose LTrajDiff, a novel approach that treats objects obstructed or out of sight as equally important as those with fully visible trajectories.
arXiv Detail & Related papers (2023-10-09T20:32:49Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- An End-to-End Vehicle Trajectory Prediction Framework [3.7311680121118345]
An accurate prediction of a future trajectory relies not only on the previous trajectory but also on a simulation of the complex interactions with other vehicles nearby.
Most state-of-the-art networks built to tackle the problem assume readily available past trajectory points.
We propose a novel end-to-end architecture that takes raw video inputs and outputs future trajectory predictions.
arXiv Detail & Related papers (2023-04-19T15:42:03Z)
- Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z)
- Ellipse Loss for Scene-Compliant Motion Prediction [12.446392441065065]
We propose a novel ellipse loss that allows the models to better reason about scene compliance and predict more realistic trajectories.
Ellipse loss penalizes off-road predictions directly in a supervised manner, by projecting the output trajectories into the top-down map frame.
It takes into account actor dimensions and orientation, providing more direct training signals to the model.
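An off-road penalty of this flavor could be sketched as follows. The grid origin, 0.5 m resolution, and 16-point ellipse sampling are illustrative assumptions, not the paper's exact loss formulation:

```python
import numpy as np

def ellipse_offroad_penalty(traj_xy, headings, length, width,
                            drivable_mask, resolution=0.5):
    """Sample points on an ellipse matching the actor's dimensions and
    orientation at each predicted pose, project them into a top-down
    drivable-area grid, and count how many fall off-road.
    drivable_mask[i, j] == 1 means grid cell (i, j) is drivable."""
    a, b = length / 2.0, width / 2.0
    angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    penalty = 0.0
    for (x, y), h in zip(traj_xy, headings):
        # ellipse boundary in the actor frame, rotated by heading h
        ex = a * np.cos(angles)
        ey = b * np.sin(angles)
        px = x + ex * np.cos(h) - ey * np.sin(h)
        py = y + ex * np.sin(h) + ey * np.cos(h)
        i = (py / resolution).astype(int)
        j = (px / resolution).astype(int)
        inside = (i >= 0) & (i < drivable_mask.shape[0]) \
               & (j >= 0) & (j < drivable_mask.shape[1])
        # points outside the grid or on non-drivable cells are penalized
        on_road = np.zeros_like(inside, dtype=bool)
        on_road[inside] = drivable_mask[i[inside], j[inside]] > 0
        penalty += float(np.sum(~on_road))
    return penalty
```

Because the footprint reflects actor length, width, and heading, the penalty gives a more direct training signal than checking the trajectory centerline alone.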
arXiv Detail & Related papers (2020-11-05T23:33:56Z)
- Map-Adaptive Goal-Based Trajectory Prediction [3.1948816877289263]
We present a new method for multi-modal, long-term vehicle trajectory prediction.
Our approach relies on using lane centerlines captured in rich maps of the environment to generate a set of proposed goal paths for each vehicle.
We show that our model outperforms state-of-the-art approaches for vehicle trajectory prediction over a 6-second horizon.
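A rough sketch of centerline-based goal proposal, using a hypothetical `propose_goals` helper; the 60 m horizon and 5 m spacing are illustrative, not taken from the paper:

```python
import numpy as np

def propose_goals(position, centerlines, horizon_dist=60.0, step=5.0):
    """For each lane centerline (a polyline of 2D points), propose goal
    points spaced along it up to horizon_dist metres ahead of the point
    on the line nearest the vehicle's current position."""
    goals = []
    for line in centerlines:
        # cumulative arc-length along the polyline
        seg = np.diff(line, axis=0)
        s = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
        i0 = np.argmin(np.linalg.norm(line - position, axis=1))
        for d in np.arange(step, horizon_dist + step, step):
            target = s[i0] + d
            if target > s[-1]:
                break
            j = np.searchsorted(s, target)
            goals.append(line[j])
    return np.array(goals)
```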
arXiv Detail & Related papers (2020-09-09T17:57:01Z)
- Implicit Latent Variable Model for Scene-Consistent Motion Forecasting [78.74510891099395]
In this paper, we aim to learn scene-consistent motion forecasts of complex urban traffic directly from sensor data.
We model the scene as an interaction graph and employ powerful graph neural networks to learn a distributed latent representation of the scene.
arXiv Detail & Related papers (2020-07-23T14:31:25Z)
- Physically constrained short-term vehicle trajectory forecasting with naive semantic maps [6.85316573653194]
We propose a model that learns to extract relevant road features from semantic maps as well as general motion of agents.
We show that our model is not only capable of anticipating future motion whilst taking into consideration road boundaries, but can also effectively and precisely predict trajectories for a longer time horizon than initially trained for.
arXiv Detail & Related papers (2020-06-09T09:52:44Z)
- A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to extract the most useful and important information.
Second, a joint feature sequence is built from the sequence and instant state information to keep the generated trajectories spatially continuous.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.