Valeo4Cast: A Modular Approach to End-to-End Forecasting
- URL: http://arxiv.org/abs/2406.08113v3
- Date: Thu, 26 Sep 2024 16:14:54 GMT
- Title: Valeo4Cast: A Modular Approach to End-to-End Forecasting
- Authors: Yihong Xu, Éloi Zablocki, Alexandre Boulch, Gilles Puy, Mickael Chen, Florent Bartoccioni, Nermin Samet, Oriane Siméoni, Spyros Gidaris, Tuan-Hung Vu, Andrei Bursuc, Eduardo Valle, Renaud Marlet, Matthieu Cord
- Abstract summary: Our solution ranks first in the Argoverse 2 End-to-end Forecasting Challenge, with 63.82 mAPf.
We depart from the current trend of tackling this task via end-to-end training from perception to forecasting, and instead use a modular approach.
We surpass forecasting results by +17.1 points over last year's winner and by +13.3 points over this year's runner-up.
- Score: 93.86257326005726
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Motion forecasting is crucial in autonomous driving systems to anticipate the future trajectories of surrounding agents such as pedestrians, vehicles, and traffic signals. In end-to-end forecasting, the model must jointly detect and track from sensor data (cameras or LiDARs) the past trajectories of the different elements of the scene and predict their future locations. We depart from the current trend of tackling this task via end-to-end training from perception to forecasting, and instead use a modular approach. We individually build and train detection, tracking and forecasting modules. We then use only consecutive finetuning steps to better integrate the modules and alleviate compounding errors. Our in-depth study of the finetuning strategies reveals that this simple yet effective approach significantly improves performance on the end-to-end forecasting benchmark. Consequently, our solution ranks first in the Argoverse 2 End-to-end Forecasting Challenge, with 63.82 mAPf. We surpass forecasting results by +17.1 points over last year's winner and by +13.3 points over this year's runner-up. This remarkable performance in forecasting can be explained by our modular paradigm, which integrates finetuning strategies and significantly outperforms the end-to-end-trained counterparts. The code, model weights and results are made available at https://github.com/valeoai/valeo4cast.
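To make the modular paradigm concrete, here is a minimal sketch of a detect-then-track-then-forecast pipeline. The greedy tracker, the constant-velocity forecaster, and all names and numbers are illustrative placeholders, not the released Valeo4Cast modules (see the repository linked above).

```python
# Toy sketch of the modular detect -> track -> forecast pipeline.
# The greedy tracker and constant-velocity forecaster are placeholders,
# not the released Valeo4Cast modules.

class NearestNeighborTracker:
    """Greedy toy tracker: links each detection to the closest existing track."""
    def __init__(self, max_dist=2.0):
        self.tracks = {}          # track_id -> list of past (x, y) positions
        self.max_dist = max_dist
        self._next_id = 0

    def update(self, detections):
        for x, y in detections:
            best_id, best_d = None, self.max_dist
            for tid, hist in self.tracks.items():
                px, py = hist[-1]
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:                      # no match: start a new track
                best_id, self._next_id = self._next_id, self._next_id + 1
                self.tracks[best_id] = []
            self.tracks[best_id].append((x, y))
        return self.tracks

def constant_velocity_forecast(tracks, horizon=3):
    """Toy forecaster: extrapolates each track with its last observed displacement."""
    futures = {}
    for tid, hist in tracks.items():
        if len(hist) < 2:
            continue
        (x0, y0), (x1, y1) = hist[-2], hist[-1]
        vx, vy = x1 - x0, y1 - y0
        futures[tid] = [(x1 + vx * t, y1 + vy * t) for t in range(1, horizon + 1)]
    return futures

# Detection happens upstream (per frame); the forecaster consumes whatever
# (possibly noisy) past trajectories the perception stack produced, which is
# the mismatch the paper's consecutive finetuning aims to absorb.
frames = [[(0.0, 0.0)], [(1.0, 0.1)], [(2.0, 0.2)]]   # toy detections per frame
tracker = NearestNeighborTracker()
for detections in frames:
    tracks = tracker.update(detections)
print(constant_velocity_forecast(tracks))             # {0: [(3.0, 0.3), ...]}
```

In the paper's setting, the consecutive finetuning steps presumably expose each downstream module to the imperfect outputs of the upstream ones, which is how the compounding errors mentioned in the abstract are alleviated.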
Related papers
- Multi-Agent Trajectory Prediction with Difficulty-Guided Feature Enhancement Network [1.5888246742280365]
Trajectory prediction is crucial for autonomous driving as it aims to forecast future movements of traffic participants.
Traditional methods usually perform holistic inference on trajectories of agents, neglecting the differences in difficulty among agents.
This paper proposes a novel Difficulty-Guided Feature Enhancement Network (DGFNet), which leverages the prediction difficulty differences among agents.
arXiv Detail & Related papers (2024-07-26T07:04:30Z) - Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z) - Towards Motion Forecasting with Real-World Perception Inputs: Are End-to-End Approaches Competitive? [93.10694819127608]
We propose a unified evaluation pipeline for forecasting methods with real-world perception inputs.
Our in-depth study uncovers a substantial performance gap when transitioning from curated to perception-based data.
arXiv Detail & Related papers (2023-06-15T17:03:14Z) - Forecasting from LiDAR via Future Object Detection [47.11167997187244]
We propose an end-to-end approach for detection and motion forecasting based on raw sensor measurements.
By linking future and current locations in a many-to-one manner, our approach is able to reason about multiple futures.
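A minimal illustration of that many-to-one idea, with made-up data structures (the field names and scores are assumptions, not the paper's code): several future hypotheses point back to the same current detection, and grouping them recovers multiple candidate futures per object.

```python
# Made-up data illustrating many-to-one linking: several future location
# hypotheses reference the same current detection, so grouping them yields
# multiple candidate futures per object (none of this is the paper's code).
from collections import defaultdict

current = {0: (0.0, 0.0), 1: (5.0, 5.0)}              # current object id -> position
future_hypotheses = [
    {"links_to": 0, "xy": (1.0, 0.0), "score": 0.7},  # object 0 keeps going straight
    {"links_to": 0, "xy": (0.8, 0.6), "score": 0.3},  # ... or turns left
    {"links_to": 1, "xy": (5.0, 6.0), "score": 0.9},
]

futures_per_object = defaultdict(list)
for hyp in future_hypotheses:
    futures_per_object[hyp["links_to"]].append((hyp["xy"], hyp["score"]))

for obj_id, futures in futures_per_object.items():
    # Highest-scoring future first; all hypotheses are kept as possible futures.
    print(obj_id, sorted(futures, key=lambda f: -f[1]))
```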
arXiv Detail & Related papers (2022-03-30T13:40:28Z) - Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z) - Sliding Sequential CVAE with Time Variant Socially-aware Rethinking for Trajectory Prediction [13.105275905781632]
Pedestrian trajectory prediction is a key technology in many applications such as video surveillance, social robot navigation, and autonomous driving.
This work proposes a novel trajectory prediction method called CSR, which consists of a cascaded conditional variational autoencoder (CVAE) module and a socially-aware regression module.
Experimental results demonstrate that the proposed method improves over state-of-the-art methods on the Stanford Drone dataset.
arXiv Detail & Related papers (2021-10-28T10:56:21Z) - The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well-defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
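As a hedged sketch of what "structured priors as a loss function" can look like, the snippet below adds an off-road penalty to a plain regression loss, using a rectangular drivable corridor as a stand-in for a real map prior; the functions, weights, and numbers are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: add a structured road prior to the forecasting loss by penalizing
# predicted waypoints that leave the drivable area. The rectangular corridor
# is a stand-in for a real HD-map prior; weights and data are illustrative.
import numpy as np

def offroad_penalty(pred_xy, y_min=-2.0, y_max=2.0):
    """Penalty grows with the distance by which a waypoint exits the corridor."""
    y = pred_xy[..., 1]
    below = np.clip(y_min - y, 0.0, None)
    above = np.clip(y - y_max, 0.0, None)
    return np.mean(below + above)

def forecasting_loss(pred_xy, gt_xy, prior_weight=0.5):
    l2 = np.mean(np.linalg.norm(pred_xy - gt_xy, axis=-1))   # data term
    return l2 + prior_weight * offroad_penalty(pred_xy)      # prior term

pred = np.array([[1.0, 0.5], [2.0, 2.5], [3.0, 3.0]])  # last two points drift off-road
gt   = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(forecasting_loss(pred, gt))                      # 2.0 (L2) + 0.25 (prior) = 2.25
```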
arXiv Detail & Related papers (2020-06-04T03:56:11Z) - PnPNet: End-to-End Perception and Prediction with Tracking in the Loop [82.97006521937101]
We tackle the problem of joint perception and motion forecasting in the context of self-driving vehicles.
We propose PnPNet, an end-to-end model that takes as input sensor data, and outputs at each time step object tracks and their future trajectories.
arXiv Detail & Related papers (2020-05-29T17:57:25Z)