Towards Motion Forecasting with Real-World Perception Inputs: Are
End-to-End Approaches Competitive?
- URL: http://arxiv.org/abs/2306.09281v4
- Date: Tue, 5 Mar 2024 11:39:05 GMT
- Authors: Yihong Xu, Loïck Chambon, Éloi Zablocki, Mickaël Chen, Alexandre
Alahi, Matthieu Cord, Patrick Pérez
- Abstract summary: We propose a unified evaluation pipeline for forecasting methods with real-world perception inputs.
Our in-depth study uncovers a substantial performance gap when transitioning from curated to perception-based data.
- Score: 93.10694819127608
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Motion forecasting is crucial in enabling autonomous vehicles to anticipate
the future trajectories of surrounding agents. To do so, it requires solving
mapping, detection, tracking, and then forecasting problems, in a multi-step
pipeline. In this complex system, advances in conventional forecasting methods
have been made using curated data, i.e., with the assumption of perfect maps,
detection, and tracking. This paradigm, however, ignores any errors from
upstream modules. Meanwhile, an emerging end-to-end paradigm, which tightly
integrates the perception and forecasting architectures into joint training,
promises to solve this issue. However, the evaluation protocols of the two
paradigms have so far been incompatible, making direct comparison impossible.
Indeed, conventional forecasting methods are usually neither trained nor tested
in real-world pipelines (e.g., with upstream detection, tracking, and mapping
modules). In this work, we aim to bring forecasting models closer to real-world
deployment. First, we propose a unified evaluation pipeline for
forecasting methods with real-world perception inputs, allowing us to compare
conventional and end-to-end methods for the first time. Second, our in-depth
study uncovers a substantial performance gap when transitioning from curated to
perception-based data. In particular, we show that this gap (1) stems not only
from differences in precision but also from the nature of imperfect inputs
provided by perception modules, and that (2) is not trivially reduced by simply
finetuning on perception outputs. Based on extensive experiments, we provide
recommendations for critical areas that require improvement and guidance
towards more robust motion forecasting in the real world. The evaluation
library for benchmarking models under standardized and practical conditions is
provided: \url{https://github.com/valeoai/MFEval}.
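The MFEval library itself is not reproduced here, but the kind of standardized forecasting metric such a pipeline evaluates can be illustrated with minADE/minFDE, the standard multi-hypothesis displacement errors used throughout this line of work. This is a generic sketch: the function name and array layout are illustrative choices, not taken from MFEval.

```python
import numpy as np

def min_ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Minimum Average / Final Displacement Error over K hypotheses.

    pred: (K, T, 2) array of K predicted trajectories over T timesteps.
    gt:   (T, 2) ground-truth trajectory.
    Returns (minADE, FDE of the minADE hypothesis).
    """
    # Per-timestep Euclidean distance for each hypothesis: (K, T)
    dists = np.linalg.norm(pred - gt[None], axis=-1)
    ade = dists.mean(axis=1)   # average error per hypothesis
    fde = dists[:, -1]         # endpoint error per hypothesis
    best = int(ade.argmin())   # keep the best hypothesis only
    return float(ade[best]), float(fde[best])

# Toy usage: one hypothesis matches the ground truth exactly,
# so the minimum errors are both zero.
gt = np.zeros((5, 2))
pred = np.stack([gt, gt + 1.0])  # (K=2, T=5, 2)
min_ade, min_fde = min_ade_fde(pred, gt)
```

Under curated inputs, `gt` comes from annotated tracks; under the paper's perception-based setting, the same metric is computed against trajectories matched to real detector/tracker outputs, which is what exposes the performance gap.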
Related papers
- RealTraj: Towards Real-World Pedestrian Trajectory Forecasting [10.332817296500533]
We propose a novel framework, RealTraj, that enhances the real-world applicability of trajectory forecasting.
We present Det2TrajFormer, a trajectory forecasting model that remains robust to tracking noise by using past detections as inputs.
Unlike previous trajectory forecasting methods, our approach fine-tunes the model using only ground-truth detections, significantly reducing the need for costly person ID annotations.
arXiv Detail & Related papers (2024-11-26T12:35:26Z) - Valeo4Cast: A Modular Approach to End-to-End Forecasting [93.86257326005726]
Our solution ranks first in the Argoverse 2 End-to-end Forecasting Challenge, with 63.82 mAPf.
We depart from the current trend of tackling this task via end-to-end training from perception to forecasting, and instead use a modular approach.
We surpass forecasting results by +17.1 points over last year's winner and by +13.3 points over this year's runner-up.
arXiv Detail & Related papers (2024-06-12T11:50:51Z) - Streaming Motion Forecasting for Autonomous Driving [71.7468645504988]
We introduce a benchmark that queries future trajectories on streaming data, which we refer to as "streaming forecasting".
Our benchmark inherently captures the disappearance and re-appearance of agents, which is a safety-critical problem yet overlooked by snapshot-based benchmarks.
We propose a plug-and-play meta-algorithm called "Predictive Streamer" that can adapt any snapshot-based forecaster into a streaming forecaster.
arXiv Detail & Related papers (2023-10-02T17:13:16Z) - Forecasting from LiDAR via Future Object Detection [47.11167997187244]
We propose an end-to-end approach for detection and motion forecasting based on raw sensor measurements.
By linking future and current locations in a many-to-one manner, our approach is able to reason about multiple futures.
arXiv Detail & Related papers (2022-03-30T13:40:28Z) - Trajectory Forecasting from Detection with Uncertainty-Aware Motion Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z) - MTP: Multi-Hypothesis Tracking and Prediction for Reduced Error Propagation [39.41917241231786]
This paper addresses the problem of cascading errors by focusing on the coupling between the tracking and prediction modules.
By using state-of-the-art tracking and prediction tools, we conduct a comprehensive experimental evaluation of how severely errors stemming from tracking can impact prediction performance.
We show that the proposed multi-hypothesis framework improves overall prediction performance over the standard single-hypothesis tracking-prediction pipeline by up to 34.2% on the nuScenes dataset.
arXiv Detail & Related papers (2021-10-18T17:30:59Z) - Injecting Knowledge in Data-driven Vehicle Trajectory Predictors [82.91398970736391]
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
arXiv Detail & Related papers (2021-03-08T16:03:09Z) - Learning Prediction Intervals for Model Performance [1.433758865948252]
We propose a method to compute prediction intervals for model performance.
We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines.
arXiv Detail & Related papers (2020-12-15T21:32:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.