Enhancing Mapless Trajectory Prediction through Knowledge Distillation
- URL: http://arxiv.org/abs/2306.14177v1
- Date: Sun, 25 Jun 2023 09:05:48 GMT
- Title: Enhancing Mapless Trajectory Prediction through Knowledge Distillation
- Authors: Yuning Wang, Pu Zhang, Lei Bai, Jianru Xue
- Abstract summary: High-definition maps (HD maps) can be costly to annotate and subject to legal restrictions that limit their widespread use.
In this paper, we tackle the problem of improving the consistency of multi-modal prediction trajectories and the real road topology.
Our solution is generalizable for common trajectory prediction networks and does not bring extra computation burden.
- Score: 19.626383744807068
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Scene information plays a crucial role in trajectory forecasting systems for
autonomous driving by providing semantic clues and constraints on potential
future paths of traffic agents. Prevalent trajectory prediction techniques
often take high-definition maps (HD maps) as part of the inputs to provide
scene knowledge. Although HD maps offer accurate road information, they can be
costly to annotate and subject to legal restrictions that limit their
widespread use. Therefore, those methods are still expected to generate
reliable prediction results in mapless scenarios. In this paper, we tackle the
problem of improving the consistency of multi-modal prediction trajectories and
the real road topology when map information is unavailable during the test
phase. Specifically, we achieve this by training a map-based prediction teacher
network on the annotated samples and transferring the knowledge to a student
mapless prediction network using a two-fold knowledge distillation framework.
Our solution is generalizable for common trajectory prediction networks and
does not bring extra computation burden. Experimental results show that our
method stably improves prediction performance in mapless mode on many widely
used state-of-the-art trajectory prediction baselines, compensating for the
gaps caused by the absence of HD maps. Qualitative visualization results
demonstrate that our approach helps infer unseen map information.
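The abstract describes a teacher trained with HD maps distilling knowledge into a mapless student via a two-fold framework. As a rough illustration only, the sketch below shows one common way such a two-fold objective is composed: a task loss against ground truth plus distillation terms matching the teacher's intermediate features and output trajectories. The function name, the MSE choice for every term, and the `alpha`/`beta` weights are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two arrays of the same shape.
    return float(np.mean((a - b) ** 2))

def twofold_distillation_loss(student_feat, student_traj,
                              teacher_feat, teacher_traj,
                              gt_traj, alpha=0.5, beta=0.5):
    """Hypothetical two-fold distillation objective (illustrative only):
    the mapless student is supervised by ground truth, and additionally
    matches the map-based teacher at two levels -- intermediate features
    and predicted trajectories. alpha/beta weights are assumed."""
    task = mse(student_traj, gt_traj)            # standard regression loss
    feature_kd = mse(student_feat, teacher_feat)  # feature-level distillation
    output_kd = mse(student_traj, teacher_traj)   # output-level distillation
    return task + alpha * feature_kd + beta * output_kd
```

When student and teacher agree and the prediction matches ground truth, every term vanishes; at test time the student runs alone, so (consistent with the abstract's claim) the distillation terms add no inference-time cost.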
Related papers
- Towards Consistent and Explainable Motion Prediction using Heterogeneous Graph Attention [0.17476232824732776]
This paper introduces a new refinement module designed to project the predicted trajectories back onto the actual map.
We also propose a novel scene encoder that handles all relations between agents and their environment in a single unified graph attention network.
arXiv Detail & Related papers (2024-05-16T14:31:15Z) - SemanticFormer: Holistic and Semantic Traffic Scene Representation for Trajectory Prediction using Knowledge Graphs [3.733790302392792]
Trajectory prediction in autonomous driving relies on accurate representation of all relevant contexts of the driving scene.
We present SemanticFormer, an approach for predicting multimodal trajectories by reasoning over a traffic scene graph.
arXiv Detail & Related papers (2024-04-30T09:11:04Z) - Augmenting Lane Perception and Topology Understanding with Standard
Definition Navigation Maps [51.24861159115138]
Standard Definition (SD) maps are more affordable and have worldwide coverage, offering a scalable alternative.
We propose a novel framework to integrate SD maps into online map prediction and propose a Transformer-based encoder, SD Map Representations from transFormers.
This enhancement consistently and significantly boosts (by up to 60%) lane detection and topology prediction on current state-of-the-art online map prediction methods.
arXiv Detail & Related papers (2023-11-07T15:42:22Z) - Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction
from Variable-Sized Maps [11.327456466796681]
Estimating path loss for a transmitter-receiver location is key to many use-cases including network planning and handover.
We present a transformer-based neural network architecture that enables predicting link-level properties from maps of various dimensions and from sparse measurements.
arXiv Detail & Related papers (2023-10-06T20:17:40Z) - Pre-training on Synthetic Driving Data for Trajectory Prediction [64.16991399882477]
We aim to tackle the challenge of learning general trajectory forecasting representations under limited data availability.
We take advantage of graph representations of HD-map and apply vector transformations to reshape the maps.
We employ a rule-based model to generate trajectories based on augmented scenes.
arXiv Detail & Related papers (2023-09-18T19:49:22Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map [58.53373202647576]
We propose PreTraM, a self-supervised pre-training scheme for trajectory forecasting.
It consists of two parts: 1) Trajectory-Map Contrastive Learning, where we project trajectories and maps to a shared embedding space with cross-modal contrastive learning, and 2) Map Contrastive Learning, where we enhance map representation with contrastive learning on large quantities of HD-maps.
On top of popular baselines such as AgentFormer and Trajectron++, PreTraM boosts their performance by 5.5% and 6.9% relatively in FDE-10 on the challenging nuScenes dataset.
arXiv Detail & Related papers (2022-04-21T23:01:21Z) - End-to-End Trajectory Distribution Prediction Based on Occupancy Grid
Maps [29.67295706224478]
In this paper, we aim to forecast a future trajectory distribution of a moving agent in the real world, given the social scene images and historical trajectories.
We learn the distribution with symmetric cross-entropy using occupancy grid maps as an explicit and scene-compliant approximation to the ground-truth distribution.
In experiments, our method achieves state-of-the-art performance on the Stanford Drone dataset and Intersection Drone dataset.
arXiv Detail & Related papers (2022-03-31T09:24:32Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation
Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - SLPC: a VRNN-based approach for stochastic lidar prediction and
completion in autonomous driving [63.87272273293804]
We propose a new LiDAR prediction framework that is based on generative models, namely Variational Recurrent Neural Networks (VRNNs).
Our algorithm is able to address the limitations of previous video prediction frameworks when dealing with sparse data by spatially inpainting the depth maps in the upcoming frames.
We present a sparse version of VRNNs and an effective self-supervised training method that does not require any labels.
arXiv Detail & Related papers (2021-02-19T11:56:44Z) - Motion Prediction using Trajectory Sets and Self-Driving Domain
Knowledge [3.0938904602244355]
We build on classification-based approaches to motion prediction by adding an auxiliary loss that penalizes off-road predictions.
This auxiliary loss can easily be pretrained using only map information, which significantly improves performance on small datasets.
Our final contribution is a detailed comparison of classification and ordinal regression on two public self-driving datasets.
arXiv Detail & Related papers (2020-06-08T17:37:15Z)
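The last entry above mentions an auxiliary loss that penalizes off-road predictions, pretrained using only map information. As a minimal sketch of that idea (not the paper's implementation), the function below rasterizes predicted waypoints into a binary drivable-area mask and returns the fraction that fall off-road; the function name, the `resolution`/`origin` parameters, and the simple fraction-based penalty are all assumed for illustration.

```python
import numpy as np

def offroad_penalty(traj_xy, drivable_mask, resolution=0.5, origin=(0.0, 0.0)):
    """Hypothetical off-road auxiliary loss (illustrative only): the fraction
    of predicted waypoints that land outside the drivable area, given a
    binary raster mask where 1 marks drivable cells."""
    # Convert metric (x, y) coordinates to grid indices.
    cols = ((traj_xy[:, 0] - origin[0]) / resolution).astype(int)
    rows = ((traj_xy[:, 1] - origin[1]) / resolution).astype(int)
    h, w = drivable_mask.shape
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    # Points outside the map bounds, or on non-drivable cells, count as off-road.
    on_road = np.zeros(len(traj_xy), dtype=bool)
    on_road[inside] = drivable_mask[rows[inside], cols[inside]] > 0
    return float(np.mean(~on_road))
```

Because the penalty needs only a drivable-area mask and no trajectory labels, it can be computed on map data alone, which is what makes map-only pretraining of such a loss possible.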
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.