STEP: Structured Training and Evaluation Platform for benchmarking trajectory prediction models
- URL: http://arxiv.org/abs/2509.14801v1
- Date: Thu, 18 Sep 2025 09:56:16 GMT
- Title: STEP: Structured Training and Evaluation Platform for benchmarking trajectory prediction models
- Authors: Julian F. Schumann, Anna Mészáros, Jens Kober, Arkady Zgonnikov
- Abstract summary: We introduce STEP -- a new benchmarking framework that addresses limitations of existing frameworks by providing a unified interface for multiple datasets. We demonstrate the capabilities of STEP in a number of experiments which reveal 1) the limitations of widely-used testing procedures, 2) the importance of joint modeling of agents for better predictions of interactions, and 3) the vulnerability of current state-of-the-art models against both distribution shifts and targeted attacks by adversarial agents.
- Score: 7.927039780654076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While trajectory prediction plays a critical role in enabling safe and effective path-planning in automated vehicles, standardized practices for evaluating such models remain underdeveloped. Recent efforts have aimed to unify dataset formats and model interfaces for easier comparisons, yet existing frameworks often fall short in supporting heterogeneous traffic scenarios, joint prediction models, or user documentation. In this work, we introduce STEP -- a new benchmarking framework that addresses these limitations by providing a unified interface for multiple datasets, enforcing consistent training and evaluation conditions, and supporting a wide range of prediction models. We demonstrate the capabilities of STEP in a number of experiments which reveal 1) the limitations of widely-used testing procedures, 2) the importance of joint modeling of agents for better predictions of interactions, and 3) the vulnerability of current state-of-the-art models against both distribution shifts and targeted attacks by adversarial agents. With STEP, we aim to shift the focus from the "leaderboard" approach to deeper insights about model behavior and generalization in complex multi-agent settings.
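The abstract describes STEP's core design: a unified dataset interface, consistent training and evaluation conditions, and a common model API. As a rough illustration of what such a unified benchmarking interface can look like, here is a minimal sketch; all class names, method names, and the constant-velocity baseline are illustrative assumptions, not STEP's actual API.

```python
import numpy as np

# Hypothetical sketch of a unified trajectory-prediction benchmark:
# a shared dataset format, a shared model API, and a shared metric.
# Names are illustrative only, not STEP's real interface.

class TrajectoryDataset:
    """Wraps agent tracks as a (num_agents, num_steps, 2) array."""
    def __init__(self, tracks, obs_len=8, pred_len=12):
        self.tracks = tracks
        self.obs_len = obs_len
        self.pred_len = pred_len

    def split(self):
        # past observations and ground-truth futures
        return (self.tracks[:, :self.obs_len],
                self.tracks[:, self.obs_len:self.obs_len + self.pred_len])

class ConstantVelocityModel:
    """Trivial baseline: extrapolate the last observed velocity."""
    def predict(self, past, horizon):
        v = past[:, -1] - past[:, -2]                 # last-step velocity
        steps = np.arange(1, horizon + 1)[None, :, None]
        return past[:, -1:, :] + steps * v[:, None, :]

def average_displacement_error(pred, truth):
    # mean Euclidean error over all agents and timesteps
    return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

# Toy evaluation: agents moving in straight lines are predicted exactly.
t = np.arange(20, dtype=float)
tracks = np.stack([np.stack([t, 2 * t], axis=-1),
                   np.stack([-t, t + 1], axis=-1)])
ds = TrajectoryDataset(tracks)
past, future = ds.split()
pred = ConstantVelocityModel().predict(past, ds.pred_len)
print(average_displacement_error(pred, future))  # straight lines -> 0.0
```

Any model implementing the same `predict(past, horizon)` signature could be dropped into this loop unchanged, which is the basic idea behind a unified benchmarking interface.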
Related papers
- Multi-Layer Confidence Scoring for Detection of Out-of-Distribution Samples, Adversarial Attacks, and In-Distribution Misclassifications [2.4219039094115034]
We introduce Multi-Layer Analysis for Confidence Scoring (MACS). We derive a score applicable to confidence estimation, detection of distributional shifts, and detection of adversarial attacks. We achieve performance that surpasses state-of-the-art approaches in our experiments with the VGG16 and ViTb16 models.
arXiv Detail & Related papers (2025-12-22T15:25:10Z)
- Merge and Guide: Unifying Model Merging and Guided Decoding for Controllable Multi-Objective Generation [49.98025799046136]
We introduce Merge-And-GuidE (MAGE), a two-stage framework that leverages model merging for guided decoding. In Stage 1, MAGE resolves a compatibility problem between the guidance and base models. In Stage 2, we merge explicit and implicit value models into a unified guidance proxy, which then steers the decoding of the base model from Stage 1.
arXiv Detail & Related papers (2025-10-04T11:10:07Z)
- Beyond Model Ranking: Predictability-Aligned Evaluation for Time Series Forecasting [18.018179328110048]
We introduce a predictability-aligned diagnostic framework grounded in spectral coherence. We provide the first systematic evidence of "predictability drift", demonstrating that a task's forecasting difficulty varies sharply over time. Our evaluation reveals a key architectural trade-off: complex models are superior for low-predictability data, whereas linear models are highly effective on more predictable tasks.
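The framework above is grounded in spectral coherence, which measures how strongly two signals are linearly related at each frequency. As a generic illustration (not the paper's implementation), a Welch-averaged magnitude-squared coherence can be computed in a few lines of NumPy:

```python
import numpy as np

def msc(x, y, nperseg=64):
    """Welch-averaged magnitude-squared coherence of two 1-D signals."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    Pxx = np.zeros(nperseg // 2 + 1)
    Pyy = np.zeros(nperseg // 2 + 1)
    Pxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = x[k * nperseg:(k + 1) * nperseg] * win
        ys = y[k * nperseg:(k + 1) * nperseg] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx += np.abs(X) ** 2          # auto-spectrum of x
        Pyy += np.abs(Y) ** 2          # auto-spectrum of y
        Pxy += X * np.conj(Y)          # cross-spectrum
    return np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-12)

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
print(msc(x, x).mean())                         # identical signals -> ~1.0
print(msc(x, rng.standard_normal(512)).mean())  # independent noise -> well below 1
```

Coherence near 1 across frequencies indicates a strong linear relationship (high predictability of one signal from the other); values near 1/num_segments are what uncorrelated noise produces under this estimator.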
arXiv Detail & Related papers (2025-09-27T02:56:06Z)
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis [14.298235969992877]
We introduce a comprehensive framework for perturbation response modeling in single cells. Our approach includes a modular and user-friendly model development and evaluation platform. We highlight the limitations of widely used models, such as mode collapse.
arXiv Detail & Related papers (2024-08-20T07:40:20Z)
- Enhanced Prediction of Multi-Agent Trajectories via Control Inference and State-Space Dynamics [14.694200929205975]
This paper introduces a novel methodology for trajectory forecasting based on state-space dynamic system modeling.
To enhance the precision of state estimations within the dynamic system, the paper also presents a novel modeling technique for control variables.
The proposed approach ingeniously integrates graph neural networks with state-space models, effectively capturing the complexities of multi-agent interactions.
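For readers unfamiliar with state-space trajectory modeling, the textbook starting point is a linear constant-velocity model filtered with a Kalman filter. The sketch below shows only that baseline, under assumed noise parameters; the paper's method additionally learns control variables and graph-based interactions, none of which appear here.

```python
import numpy as np

# Minimal linear state-space trajectory filter: a constant-velocity
# Kalman filter over 2-D positions. Illustrative baseline only, with
# assumed process/measurement noise levels q and r.

def kalman_cv(observations, dt=1.0, q=1e-2, r=1e-1):
    """Filter noisy 2-D positions; state = [x, y, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                        # position += velocity * dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1   # observe positions only
    Q = q * np.eye(4)                             # process noise covariance
    R = r * np.eye(2)                             # measurement noise covariance
    x = np.zeros(4); x[:2] = observations[0]
    P = np.eye(4)
    estimates = []
    for z in observations:
        # predict step: propagate state and covariance through the dynamics
        x = F @ x
        P = F @ P @ F.T + Q
        # update step: correct with the position measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)
```

In the setting the abstract describes, an inferred control input could enter the predict step as an additional term in the dynamics, with an interaction model (e.g. a graph neural network) supplying that term; this sketch omits both.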
arXiv Detail & Related papers (2024-08-08T08:33:02Z)
- Predictive Churn with the Set of Good Models [61.00058053669447]
This paper explores connections between two seemingly unrelated concepts of predictive inconsistency. The first, known as predictive multiplicity, occurs when models that perform similarly produce conflicting predictions for individual samples. The second concept, predictive churn, examines the differences in individual predictions before and after model updates.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Collaborative Uncertainty Benefits Multi-Agent Multi-Modal Trajectory Forecasting [61.02295959343446]
This work first proposes a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from interaction modules. We build a general CU-aware regression framework with an original permutation-equivariant uncertainty estimator to perform both regression and uncertainty estimation. We apply the proposed framework to current SOTA multi-agent trajectory forecasting systems as a plugin module.
arXiv Detail & Related papers (2022-07-11T21:17:41Z)
- LatentFormer: Multi-Agent Transformer-Based Interaction Modeling and Trajectory Prediction [12.84508682310717]
We propose LatentFormer, a transformer-based model for predicting future vehicle trajectories.
We evaluate the proposed method on the nuScenes benchmark dataset and show that our approach achieves state-of-the-art performance and improves upon trajectory metrics by up to 40%.
arXiv Detail & Related papers (2022-03-03T17:44:58Z)
- Collaborative Uncertainty in Multi-Agent Trajectory Forecasting [35.013892666040846]
We propose a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from the interaction module.
We build a general CU-based framework that lets a prediction model learn both the future trajectory and the corresponding uncertainty.
We conduct extensive experiments on two synthetic datasets and two public, large-scale benchmarks of trajectory forecasting.
arXiv Detail & Related papers (2021-10-26T18:27:22Z)
- SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z)
- Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.