Towards Generalizable and Interpretable Motion Prediction: A Deep
Variational Bayes Approach
- URL: http://arxiv.org/abs/2403.06086v1
- Date: Sun, 10 Mar 2024 04:16:04 GMT
- Title: Towards Generalizable and Interpretable Motion Prediction: A Deep
Variational Bayes Approach
- Authors: Juanwu Lu, Wei Zhan, Masayoshi Tomizuka, Yeping Hu
- Abstract summary: This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model is interpretable and generalizes well.
- Score: 54.429396802848224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the potential behavior of the surrounding human-driven vehicles is
crucial for the safety of autonomous vehicles in a mixed traffic flow. Recent
state-of-the-art methods achieve accurate predictions using deep neural networks.
However, these end-to-end models are usually black boxes with weak
interpretability and generalizability. This paper proposes the Goal-based
Neural Variational Agent (GNeVA), an interpretable generative model for motion
prediction with robust generalizability to out-of-distribution cases. For
interpretability, the model achieves target-driven motion prediction by
estimating the spatial distribution of long-term destinations with a
variational mixture of Gaussians. We identify a causal structure among maps and
agents' histories and derive a variational posterior to enhance
generalizability. Experiments on motion prediction datasets validate that the
fitted model is interpretable and generalizable, and achieves performance
comparable to state-of-the-art results.
Related papers
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - Explainability as statistical inference [29.74336283497203]
We propose a general deep probabilistic model designed to produce interpretable predictions.
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture.
We show experimentally that using multiple imputation provides more reasonable interpretations.
arXiv Detail & Related papers (2022-12-06T16:55:10Z) - Generalizability Analysis of Graph-based Trajectory Predictor with
Vectorized Representation [29.623692599892365]
Trajectory prediction is one of the essential tasks for autonomous vehicles.
Recent progress in machine learning has produced a series of advanced trajectory prediction algorithms.
arXiv Detail & Related papers (2022-08-06T20:19:52Z) - Online Adaptation of Neural Network Models by Modified Extended Kalman
Filter for Customizable and Transferable Driving Behavior Prediction [3.878105750489657]
Behavior prediction of human drivers is crucial for efficient and safe deployment of autonomous vehicles.
In this paper, we apply a $\tau$-step modified Extended Kalman Filter parameter adaptation algorithm to the driving behavior prediction task.
With the feedback of the observed trajectory, the algorithm is applied to improve the performance of driving behavior predictions across different human subjects and scenarios.
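As a minimal sketch of the EKF-style parameter adaptation this blurb describes (not the paper's exact $\tau$-step algorithm), one can treat the predictor parameters as the filter state and each observed trajectory point as a measurement. For a linear predictor `y = H @ theta` the measurement Jacobian is simply `H`; all numeric values below are hypothetical.

```python
import numpy as np

def ekf_parameter_update(theta, P, H, y_obs, R):
    """One EKF measurement update of the parameter estimate theta."""
    y_pred = H @ theta
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    theta_new = theta + K @ (y_obs - y_pred)
    P_new = (np.eye(len(theta)) - K @ H) @ P
    return theta_new, P_new

# Adapt online toward a "driver" whose true parameters differ from the prior.
true_theta = np.array([1.5, -0.5])
theta = np.zeros(2)                     # prior parameter estimate
P = np.eye(2)                           # prior parameter covariance
R = np.eye(1) * 0.01                    # assumed measurement noise
rng = np.random.default_rng(0)
for _ in range(50):
    H = rng.normal(size=(1, 2))         # stand-in measurement Jacobian
    y_obs = H @ true_theta              # noiseless observation for clarity
    theta, P = ekf_parameter_update(theta, P, H, y_obs, R)
```

The same update applies to a nonlinear predictor by linearizing it, i.e. taking `H` as the Jacobian of the network output with respect to the adapted parameters, which is the essence of using an EKF for online model customization.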
arXiv Detail & Related papers (2021-12-09T05:39:21Z) - Transforming Autoregression: Interpretable and Expressive Time Series
Forecast [0.0]
We propose Autoregressive Transformation Models (ATMs), a model class inspired by various research directions.
ATMs unite expressive distributional forecasts using a semi-parametric distribution assumption with an interpretable model specification.
We demonstrate the properties of ATMs both theoretically and through empirical evaluation on several simulated and real-world forecasting datasets.
arXiv Detail & Related papers (2021-10-15T17:58:49Z) - HYPER: Learned Hybrid Trajectory Prediction via Factored Inference and
Adaptive Sampling [27.194900145235007]
We introduce HYPER, a general and expressive hybrid prediction framework.
By modeling traffic agents as a hybrid discrete-continuous system, our approach is capable of predicting discrete intent changes over time.
We train and validate our model on the Argoverse dataset, and demonstrate its effectiveness through comprehensive ablation studies and comparisons with state-of-the-art models.
arXiv Detail & Related papers (2021-10-05T20:20:10Z) - Hybrid Physics and Deep Learning Model for Interpretable Vehicle State
Prediction [75.1213178617367]
We propose a hybrid approach combining deep learning and physical motion models.
We achieve interpretability by restricting the output range of the deep neural network as part of the hybrid model.
The results show that our hybrid model can improve model interpretability with no decrease in accuracy compared to existing deep learning approaches.
arXiv Detail & Related papers (2021-03-11T15:21:08Z) - Probabilistic electric load forecasting through Bayesian Mixture Density
Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
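A mixture density network, the building block this blurb builds on, maps raw network outputs to mixture parameters and trains them with the mixture log-likelihood. The sketch below illustrates only that output head for a 1D Gaussian mixture (it omits the paper's Bayesian treatment); all raw output values are hypothetical placeholders.

```python
import numpy as np

def mdn_neg_log_likelihood(raw_logits, raw_means, raw_log_sigmas, y):
    """Negative log-likelihood of scalar targets y under a 1D Gaussian mixture."""
    logits = raw_logits - raw_logits.max()          # numerically stable softmax
    weights = np.exp(logits) / np.exp(logits).sum()
    sigmas = np.exp(raw_log_sigmas)                 # enforce positive scales
    y = np.atleast_1d(y)[:, None]
    # Component log-densities, shape (n_targets, n_components)
    log_comp = (
        -0.5 * ((y - raw_means) / sigmas) ** 2
        - np.log(sigmas)
        - 0.5 * np.log(2.0 * np.pi)
    )
    log_mix = np.log((weights * np.exp(log_comp)).sum(axis=1))
    return -log_mix.mean()

# A two-component head: a target near a mode scores better than an outlier.
nll_near = mdn_neg_log_likelihood(np.array([0.0, 0.0]), np.array([10.0, 50.0]),
                                  np.array([0.0, 0.0]), np.array([10.0]))
nll_far = mdn_neg_log_likelihood(np.array([0.0, 0.0]), np.array([10.0, 50.0]),
                                 np.array([0.0, 0.0]), np.array([30.0]))
```

A Bayesian variant, as in the paper, would additionally place distributions over the network weights themselves, e.g. via mean-field variational inference or deep ensembles, so the mixture parameters carry epistemic uncertainty.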
arXiv Detail & Related papers (2020-12-23T16:21:34Z) - Estimating Generalization under Distribution Shifts via Domain-Invariant
Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z) - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.