Robustness Benchmark of Road User Trajectory Prediction Models for
Automated Driving
- URL: http://arxiv.org/abs/2304.01895v1
- Date: Tue, 4 Apr 2023 15:47:42 GMT
- Title: Robustness Benchmark of Road User Trajectory Prediction Models for
Automated Driving
- Authors: Manuel Muñoz Sánchez, Emilia Silvas, Jos Elfring, René van de Molengraft
- Abstract summary: We benchmark machine learning models against perturbations that simulate functional insufficiencies observed during model deployment in a vehicle.
Training the models with similar perturbations effectively reduces performance degradation, with error increases of up to +87.5%.
We argue that despite being an effective mitigation strategy, data augmentation through perturbations during training does not guarantee robustness towards unforeseen perturbations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and robust trajectory predictions of road users are needed to enable
safe automated driving. To do this, machine learning models are often used,
which can show erratic behavior when presented with previously unseen inputs.
In this work, two environment-aware models (MotionCNN and MultiPath++) and two
common baselines (Constant Velocity and an LSTM) are benchmarked for robustness
against various perturbations that simulate functional insufficiencies observed
during model deployment in a vehicle: unavailability of road information, late
detections, and noise. Results show significant performance degradation under
the presence of these perturbations, with errors increasing up to +1444.8% in
commonly used trajectory prediction evaluation metrics. Training the models
with similar perturbations effectively reduces performance degradation, with
error increases of up to +87.5%. We argue that despite being an effective
mitigation strategy, data augmentation through perturbations during training
does not guarantee robustness towards unforeseen perturbations, since
identification of all possible on-road complications is unfeasible.
Furthermore, degrading the inputs sometimes leads to more accurate predictions,
suggesting that the models are unable to learn the true relationships between
the different elements in the data.
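The three perturbation families named in the abstract (unavailable road information, late detections, and noise) map directly onto simple transformations of the model inputs. The sketch below is a minimal illustration, not the benchmark's actual code; the `map_features` key, function names, and parameter values are assumptions:

```python
import numpy as np

def drop_road_information(scene: dict) -> dict:
    """Simulate unavailable road information by removing map features."""
    perturbed = dict(scene)
    perturbed["map_features"] = None  # model must predict without lane geometry
    return perturbed

def late_detection(history: np.ndarray, n_missing: int) -> np.ndarray:
    """Simulate a late detection: the oldest n_missing observations are lost."""
    return history[n_missing:]

def add_position_noise(history: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Simulate sensor noise with i.i.d. Gaussian offsets on (x, y) positions."""
    return history + np.random.normal(scale=sigma, size=history.shape)

# Example: a 10-step (x, y) history of an agent moving at constant velocity.
history = np.cumsum(np.full((10, 2), 0.5), axis=0)
assert late_detection(history, n_missing=4).shape == (6, 2)
assert add_position_noise(history, sigma=0.2).shape == (10, 2)
```

Applying the same transformations as data augmentation during training is the mitigation strategy the paper evaluates.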
Related papers
- FollowGen: A Scaled Noise Conditional Diffusion Model for Car-Following Trajectory Prediction [9.2729178775419]
This study introduces a scaled noise conditional diffusion model for car-following trajectory prediction.
It integrates detailed inter-vehicular interactions and car-following dynamics into a generative framework, improving the accuracy and plausibility of predicted trajectories.
Experimental results on diverse real-world driving scenarios demonstrate the state-of-the-art performance and robustness of the proposed method.
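As a rough illustration only (the paper's actual architecture, noise schedule, and conditioning are not reproduced here), a "scaled noise" forward step in a DDPM-style trajectory model might modulate the injected Gaussian noise with a per-sample scale derived from the car-following context:

```python
import torch

def forward_diffuse(x0: torch.Tensor, t: int, alpha_bar: torch.Tensor,
                    noise_scale: torch.Tensor) -> torch.Tensor:
    """DDPM-style forward noising with a context-dependent noise scale.

    x0:          (B, T, 2) clean future trajectories
    alpha_bar:   (num_steps,) cumulative products of (1 - beta_t)
    noise_scale: (B, 1, 1) per-sample scale, e.g. derived from headway/velocity
    """
    eps = torch.randn_like(x0) * noise_scale       # scaled Gaussian noise
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps  # noisy sample x_t

# Toy usage with a linear beta schedule and a stand-in context scale.
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 100), dim=0)
x0 = torch.randn(8, 30, 2)
scale = 0.5 + torch.rand(8, 1, 1)
x_t = forward_diffuse(x0, t=50, alpha_bar=alpha_bar, noise_scale=scale)
```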
arXiv Detail & Related papers (2024-11-23T23:13:45Z)
- Uncertainty-aware Human Mobility Modeling and Anomaly Detection [28.311683535974634]
We study how to model human agents' mobility behavior toward effective anomaly detection.
We use GPS data as a sequence of stay-point events, each with a set of characterizing temporal features.
Experiments on large expert-simulated datasets with tens of thousands of agents demonstrate the effectiveness of our model.
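A minimal sketch of the stay-point representation this summary describes; the field names are assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass

@dataclass
class StayPoint:
    """One stay-point event extracted from a raw GPS trace."""
    lat: float
    lon: float
    arrival_hour: int    # temporal features characterizing the visit
    duration_min: float
    day_of_week: int

# An agent's mobility is modeled as a sequence of such events.
trace = [
    StayPoint(52.37, 4.90, arrival_hour=8, duration_min=540.0, day_of_week=0),
    StayPoint(52.35, 4.88, arrival_hour=18, duration_min=90.0, day_of_week=0),
]
```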
arXiv Detail & Related papers (2024-10-02T06:57:08Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
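One plausible form of such a loss term, sketched under the assumption of a normalized spatial attention map and a binary saliency mask (this is not the paper's exact formulation), penalizes attention mass that falls outside the salient regions; since the term only shapes training, the saliency maps are not needed at test time:

```python
import torch

def attention_guidance_loss(attn: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss pushing attention toward salient semantic regions.

    attn:     (B, H, W) attention maps, each normalized to sum to 1
    saliency: (B, H, W) binary masks of salient regions
    """
    off_mask = attn * (1.0 - saliency)     # attention spent outside the mask
    return off_mask.sum(dim=(1, 2)).mean()

# total_loss = imitation_loss + lambda_attn * attention_guidance_loss(attn, sal)
```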
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security [1.949927790632678]
In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer.
The analysis reveals that, among all inputs, almost all of the perturbation sensitivity for both models lies in the most recent position and velocity states.
We additionally demonstrate that, despite dominant sensitivity on state history perturbations, an undetectable image map perturbation can induce large prediction error increases in both models.
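A gradient-based variant of such a sensitivity analysis can be sketched as follows (hypothetical model interface; Trajectron++ and AgentFormer expose their own APIs):

```python
import torch

def input_sensitivity(model, history: torch.Tensor) -> torch.Tensor:
    """Sensitivity of the predicted endpoint to each input timestep.

    history: (T, D) past position/velocity states
    Returns: (T,) input-gradient norms; large values mark the states the
             prediction depends on most (here, typically the most recent).
    """
    history = history.clone().requires_grad_(True)
    pred = model(history)          # assumed to return (T_future, 2) positions
    pred[-1].norm().backward()     # scalar probe: magnitude of the endpoint
    return history.grad.norm(dim=-1)
```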
arXiv Detail & Related papers (2024-01-18T18:47:29Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
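The summary does not define the Rational Driving Constraints formally; as a hedged sketch, a violation count over the quantities it names could look like this (thresholds are illustrative, not RACER's):

```python
import numpy as np

def count_rdc_violations(acc: np.ndarray, vel: np.ndarray, spacing: np.ndarray,
                         a_max: float = 3.0, v_max: float = 40.0,
                         s_min: float = 2.0) -> int:
    """Count timesteps violating simple rational-driving bounds.

    acc, vel, spacing: 1-D arrays of acceleration (m/s^2), velocity (m/s),
    and inter-vehicle spacing (m) from a predicted car-following trajectory.
    """
    bad = (np.abs(acc) > a_max) | (vel < 0) | (vel > v_max) | (spacing < s_min)
    return int(bad.sum())
```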
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state do not have causality but have correlations induced by unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Benchmark for Models Predicting Human Behavior in Gap Acceptance Scenarios [4.801975818473341]
We develop a framework facilitating the evaluation of any model, by any metric, and in any scenario.
We then apply this framework to state-of-the-art prediction models, which all show themselves to be unreliable in the most safety-critical situations.
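Such a model-, metric-, and scenario-agnostic benchmark is naturally expressed as a pair of small interfaces; the sketch below shows one plausible shape (names are illustrative, not the framework's actual API):

```python
from typing import Iterable, Protocol

class PredictionModel(Protocol):
    def predict(self, scene): ...          # returns predicted trajectories

class Metric(Protocol):
    def __call__(self, predicted, ground_truth) -> float: ...

def evaluate(model: PredictionModel, metric: Metric, scenarios: Iterable) -> float:
    """Average any metric over any scenario set for any model."""
    scores = [metric(model.predict(s), s.ground_truth) for s in scenarios]
    return sum(scores) / len(scores)
```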
arXiv Detail & Related papers (2022-11-10T09:59:38Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use causal-agent labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original.
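For reference, minADE, the metric whose relative change is reported here, is the average displacement error of the best of K predicted trajectories:

```python
import numpy as np

def min_ade(preds: np.ndarray, gt: np.ndarray) -> float:
    """minADE over K hypotheses.

    preds: (K, T, 2) predicted trajectories
    gt:    (T, 2) ground-truth trajectory
    """
    ade_per_mode = np.linalg.norm(preds - gt, axis=-1).mean(axis=-1)  # (K,)
    return float(ade_per_mode.min())
```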
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
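A toy sketch of such a monitor (illustrative only: the running-mean threshold below does not control the false alarm rate under continuous monitoring, which the paper's actual procedure is designed to do):

```python
def monitor(risk_stream, baseline_risk: float, tolerance: float):
    """Warn only when the running risk exceeds baseline + tolerance.

    risk_stream: iterable of per-batch risk estimates from deployment.
    Benign shifts (risk staying within tolerance) are ignored by design.
    """
    total = 0.0
    for n, risk in enumerate(risk_stream, start=1):
        total += risk
        if total / n > baseline_risk + tolerance:
            return n          # index at which the alarm fires
    return None               # no harmful shift detected
```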
arXiv Detail & Related papers (2021-10-12T17:21:41Z)