PnPNet: End-to-End Perception and Prediction with Tracking in the Loop
- URL: http://arxiv.org/abs/2005.14711v2
- Date: Sat, 27 Jun 2020 21:32:07 GMT
- Title: PnPNet: End-to-End Perception and Prediction with Tracking in the Loop
- Authors: Ming Liang, Bin Yang, Wenyuan Zeng, Yun Chen, Rui Hu, Sergio Casas,
Raquel Urtasun
- Abstract summary: We tackle the problem of joint perception and motion forecasting in the context of self-driving vehicles.
We propose PnPNet, an end-to-end model that takes as input sensor data, and outputs at each time step object tracks and their future trajectories.
- Score: 82.97006521937101
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We tackle the problem of joint perception and motion forecasting in the
context of self-driving vehicles. Towards this goal we propose PnPNet, an
end-to-end model that takes as input sequential sensor data, and outputs at
each time step object tracks and their future trajectories. The key component
is a novel tracking module that generates object tracks online from detections
and exploits trajectory level features for motion forecasting. Specifically,
the object tracks get updated at each time step by solving both the data
association problem and the trajectory estimation problem. Importantly, the
whole model is end-to-end trainable and benefits from joint optimization of all
tasks. We validate PnPNet on two large-scale driving datasets, and show
significant improvements over the state-of-the-art with better occlusion
recovery and more accurate future prediction.
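The abstract describes updating object tracks each time step by solving a data association problem between existing tracks and new detections. As a minimal illustration of that association step (a greedy nearest-neighbor sketch over 2-D centroids, not the learned association module PnPNet actually uses; the gating threshold is an assumption):

```python
import math

def associate(tracks, detections, gate=2.0):
    """Greedily match detections to tracks by centroid distance.

    tracks, detections: lists of (x, y) centroids.
    Returns (matches, unmatched_detections), where matches is a list of
    (track_index, detection_index) pairs within the gating distance.
    """
    # Enumerate all track/detection pairs, cheapest first.
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    matched_t, matched_d, matches = set(), set(), []
    for cost, ti, di in pairs:
        # Accept a pair only if both sides are still free and the
        # distance is within the gate (otherwise it spawns a new track).
        if cost <= gate and ti not in matched_t and di not in matched_d:
            matches.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
    unmatched = [di for di in range(len(detections)) if di not in matched_d]
    return matches, unmatched
```

Unmatched detections would initialize new tracks, and unmatched tracks can persist for a few frames to bridge occlusions, which is the behavior the abstract's "better occlusion recovery" refers to.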
Related papers
- Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised method to enhance end-to-end driving without the need for costly labels.
Our framework, LAW, uses a LAtent World model to predict future latent features based on the predicted ego actions and the latent feature of the current frame.
As a result, our approach achieves state-of-the-art performance in both open-loop and closed-loop benchmarks without costly annotations.
arXiv Detail & Related papers (2024-06-12T17:59:21Z) - Valeo4Cast: A Modular Approach to End-to-End Forecasting [93.86257326005726]
Our solution ranks first in the Argoverse 2 End-to-end Forecasting Challenge, with 63.82 mAPf.
We depart from the current trend of tackling this task via end-to-end training from perception to forecasting, and instead use a modular approach.
We surpass forecasting results by +17.1 points over last year's winner and by +13.3 points over this year's runner-up.
arXiv Detail & Related papers (2024-06-12T11:50:51Z) - JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
This dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z) - Interaction-Aware Personalized Vehicle Trajectory Prediction Using
Temporal Graph Neural Networks [8.209194305630229]
Existing methods mainly rely on generic trajectory predictions from large datasets.
We propose an approach for interaction-aware personalized vehicle trajectory prediction that incorporates temporal graph neural networks.
arXiv Detail & Related papers (2023-08-14T20:20:26Z) - An End-to-End Framework of Road User Detection, Tracking, and Prediction
from Monocular Images [11.733622044569486]
We build an end-to-end framework for detection, tracking, and trajectory prediction called ODTP.
It adopts the state-of-the-art online multi-object tracking model, QD-3DT, for perception and trains the trajectory predictor, DCENet++, directly based on the detection results.
We evaluate the performance of ODTP on the widely used nuScenes dataset for autonomous driving.
arXiv Detail & Related papers (2023-08-09T15:46:25Z) - An End-to-End Vehicle Trajectory Prediction Framework [3.7311680121118345]
An accurate prediction of a future trajectory does not just rely on the previous trajectory, but also on a simulation of the complex interactions between other vehicles nearby.
Most state-of-the-art networks built to tackle the problem assume readily available past trajectory points.
We propose a novel end-to-end architecture that takes raw video inputs and outputs future trajectory predictions.
arXiv Detail & Related papers (2023-04-19T15:42:03Z) - Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z) - STINet: Spatio-Temporal-Interactive Network for Pedestrian Detection and
Trajectory Prediction [24.855059537779294]
We present a novel end-to-end two-stage network: Spatio-Temporal-Interactive Network (STINet).
In addition to 3D geometry of pedestrians, we model temporal information for each of the pedestrians.
Our method predicts both current and past locations in the first stage, so that each pedestrian can be linked across frames.
arXiv Detail & Related papers (2020-05-08T18:43:01Z) - TPNet: Trajectory Proposal Network for Motion Prediction [81.28716372763128]
Trajectory Proposal Network (TPNet) is a novel two-stage motion prediction framework.
TPNet first generates a candidate set of future trajectories as hypothesis proposals, then makes the final predictions by classifying and refining the proposals.
Experiments on four large-scale trajectory prediction datasets show that TPNet achieves state-of-the-art results both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-04-26T00:01:49Z)
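TPNet's two-stage propose-then-refine pattern can be sketched in a toy 1-D setting (purely illustrative: the constant-acceleration proposals, the goal-distance scorer, and the endpoint nudge below are simple stand-ins, not TPNet's learned classification and refinement modules):

```python
def propose(last_pos, last_vel, horizon=3, accels=(-1.0, 0.0, 1.0)):
    """Stage 1: generate candidate future trajectories as hypothesis
    proposals, here one per constant-acceleration hypothesis."""
    proposals = []
    for a in accels:
        traj, pos, vel = [], last_pos, last_vel
        for _ in range(horizon):
            vel += a
            pos += vel
            traj.append(pos)
        proposals.append(traj)
    return proposals

def score(traj, goal):
    """Stand-in classifier: proposals ending nearer the goal score higher."""
    return -abs(traj[-1] - goal)

def predict(last_pos, last_vel, goal):
    """Stage 2: classify the proposals, keep the best, then refine it."""
    proposals = propose(last_pos, last_vel)
    best = max(proposals, key=lambda t: score(t, goal))
    # Stand-in refinement: nudge the endpoint halfway toward the goal.
    best[-1] += 0.5 * (goal - best[-1])
    return best
```

The design point the entry highlights is that discrete proposals turn open-ended regression into classification plus a small residual refinement, which tends to be easier to learn.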
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.