Online Adaptation of Neural Network Models by Modified Extended Kalman
Filter for Customizable and Transferable Driving Behavior Prediction
- URL: http://arxiv.org/abs/2112.06129v1
- Date: Thu, 9 Dec 2021 05:39:21 GMT
- Title: Online Adaptation of Neural Network Models by Modified Extended Kalman
Filter for Customizable and Transferable Driving Behavior Prediction
- Authors: Letian Wang, Yeping Hu, Changliu Liu
- Abstract summary: Behavior prediction of human drivers is crucial for efficient and safe deployment of autonomous vehicles.
In this paper, we apply a $\tau$-step modified Extended Kalman Filter parameter adaptation algorithm to the driving behavior prediction task.
With the feedback of the observed trajectory, the algorithm is applied to improve the performance of driving behavior predictions across different human subjects and scenarios.
- Score: 3.878105750489657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High fidelity behavior prediction of human drivers is crucial for efficient
and safe deployment of autonomous vehicles, which is challenging due to the
stochasticity, heterogeneity, and time-varying nature of human behaviors. On
one hand, the trained prediction model can only capture the motion pattern in
an average sense, while the nuances among individuals can hardly be reflected.
On the other hand, the prediction model trained on the training set may not
generalize to the testing set which may be in a different scenario or data
distribution, resulting in low transferability and generalizability. In this
paper, we apply a $\tau$-step modified Extended Kalman Filter parameter
adaptation algorithm (MEKF$_\lambda$) to the driving behavior prediction task,
which has not previously been studied in the literature. With the feedback of the
observed trajectory, the algorithm is applied to neural-network-based models to
improve the performance of driving behavior predictions across different human
subjects and scenarios. A new set of metrics is proposed for systematic
evaluation of online adaptation performance in reducing the prediction error
for different individuals and scenarios. Empirical studies on which layer of
the model to adapt, and on how many observation steps to use, are also provided.
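The adaptation idea described above treats the network's weights as the state of a Kalman filter: the observed trajectory over a $\tau$-step window acts as the measurement, and the filter's correction step nudges the weights toward that individual driver's behavior. The sketch below is a minimal illustration of this style of update, not the authors' implementation; the forgetting factor `lam`, the toy linear-in-parameters model, and all numeric values are assumptions for demonstration.

```python
import numpy as np

def mekf_adapt(theta, P, H, y_obs, y_pred, Q, R, lam=0.98):
    """One MEKF-style update treating model parameters as the filter state.

    theta  : (n,) parameter vector being adapted online
    P      : (n, n) parameter covariance
    H      : (m, n) Jacobian of the model output w.r.t. theta
    y_obs  : (m,) observed trajectory over the tau-step window, flattened
    y_pred : (m,) model prediction for the same window
    lam    : forgetting factor (illustrating the lambda in MEKF_lambda);
             lam < 1 discounts stale information faster
    """
    P = P / lam + Q                           # predict: inflate covariance
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    theta = theta + K @ (y_obs - y_pred)      # correct parameters with feedback
    P = (np.eye(len(theta)) - K @ H) @ P      # update covariance
    return theta, P

# Toy demo: a linear-in-parameters predictor y = H @ theta, so the
# Jacobian w.r.t. theta is just H itself (hypothetical numbers).
H = np.array([[1.0, 0.5, -0.2],
              [0.3, 1.0,  0.4]])
theta_true = np.array([0.8, -0.5, 1.2])       # this driver's "true" behavior
theta = np.zeros(3)                           # stale offline-trained parameters
P = np.eye(3)
Q = 1e-4 * np.eye(3)                          # process noise on the weights
R = 1e-2 * np.eye(2)                          # observation noise

y_obs = H @ theta_true                        # feedback: observed trajectory
err_before = np.linalg.norm(y_obs - H @ theta)
theta, P = mekf_adapt(theta, P, H, y_obs, H @ theta, Q, R)
err_after = np.linalg.norm(y_obs - H @ theta)
```

In practice the paper adapts a chosen layer of a neural network rather than a linear model, so `H` would be the Jacobian of that layer's output with respect to its weights; the update structure stays the same.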
Related papers
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models [0.0]
We propose a Bayesian recurrent neural network framework for uncertainty in traffic prediction with higher generalizability.
We show that normalization alters the training process of deep neural networks by controlling the model's complexity.
Our findings are especially relevant to traffic management applications, where predicting traffic conditions across multiple locations is the goal.
arXiv Detail & Related papers (2023-07-12T06:23:31Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
The proposed BNSP-SFM achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Meta-Auxiliary Learning for Adaptive Human Pose Prediction [26.877194503491072]
Predicting high-fidelity future human poses is essential for intelligent robots to interact with humans.
Deep end-to-end learning approaches, which typically train a generic pre-trained model on external datasets and then directly apply it to all test samples, remain non-optimal.
We propose a novel test-time adaptation framework that leverages two self-supervised auxiliary tasks to help the primary forecasting network adapt to the test sequence.
arXiv Detail & Related papers (2023-04-13T11:17:09Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Hybrid Physics and Deep Learning Model for Interpretable Vehicle State Prediction [75.1213178617367]
We propose a hybrid approach combining deep learning and physical motion models.
We achieve interpretability by restricting the output range of the deep neural network as part of the hybrid model.
The results show that our hybrid model can improve model interpretability with no decrease in accuracy compared to existing deep learning approaches.
arXiv Detail & Related papers (2021-03-11T15:21:08Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our results highlight that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.