Human Trajectory Forecasting with Explainable Behavioral Uncertainty
- URL: http://arxiv.org/abs/2307.01817v1
- Date: Tue, 4 Jul 2023 16:45:21 GMT
- Title: Human Trajectory Forecasting with Explainable Behavioral Uncertainty
- Authors: Jiangbei Yue, Dinesh Manocha and He Wang
- Abstract summary: Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
- Score: 63.62824628085961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human trajectory forecasting helps to understand and predict human behaviors,
enabling applications from social robots to self-driving cars, and therefore
has been heavily investigated. Most existing methods can be divided into
model-free and model-based methods. Model-free methods offer superior
prediction accuracy but lack explainability, while model-based methods provide
explainability but cannot predict well. Combining both methodologies, we
propose a new Bayesian Neural Stochastic Differential Equation model BNSP-SFM,
where a behavior SDE model is combined with Bayesian neural networks (BNNs).
While the NNs provide superior predictive power, the SDE offers strong
explainability with quantifiable uncertainty in behavior and observation. We
show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy,
compared with 11 state-of-the-art methods. BNSP-SFM also generalizes better to
drastically different scenes with different environments and crowd densities (~
20 times higher than the testing data). Finally, BNSP-SFM can provide
predictions with confidence to better explain potential causes of behaviors.
The code will be released upon acceptance.
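Since the official implementation is not yet released, the snippet below is only a hedged, unofficial sketch of the general idea described in the abstract: an Euler-Maruyama rollout of a social-force-style SDE whose residual drift and diffusion come from a small neural network, with dropout kept active at inference as a crude stand-in for Bayesian weight sampling. All names (BehaviorNet, goal_attraction, rollout) and hyperparameters are hypothetical, not taken from the paper.
```python
# Illustrative sketch only, not the authors' code: social-force drift + neural
# residual drift + learned diffusion, integrated with Euler-Maruyama.
import torch
import torch.nn as nn

class BehaviorNet(nn.Module):
    """Predicts a drift correction and a log-diffusion from the current state.
    Dropout kept active at inference approximates Bayesian weight sampling."""
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(hidden, 4),  # 2-D drift correction + 2-D log sigma
        )

    def forward(self, state):
        out = self.net(state)
        return out[..., :2], out[..., 2:]  # residual drift, log diffusion

def goal_attraction(pos, vel, goal, tau=0.5):
    """Classic social-force goal term: relax velocity toward the goal direction
    (unit desired speed assumed for simplicity)."""
    to_goal = goal - pos
    desired = to_goal / to_goal.norm(dim=-1, keepdim=True).clamp_min(1e-6)
    return (desired - vel) / tau

@torch.no_grad()
def rollout(pos, vel, goal, net, steps=12, dt=0.4):
    """One stochastic future: v += (f_goal + residual) dt + sigma dW; x += v dt."""
    net.train()  # keep dropout on so repeated rollouts differ (MC sampling)
    traj = []
    for _ in range(steps):
        state = torch.cat([pos, vel], dim=-1)
        residual, log_sigma = net(state)
        drift = goal_attraction(pos, vel, goal) + residual
        noise = torch.randn_like(vel) * log_sigma.exp() * dt ** 0.5
        vel = vel + drift * dt + noise
        pos = pos + vel * dt
        traj.append(pos)
    return torch.stack(traj, dim=-2)  # (batch, steps, 2)
```
Calling rollout several times yields a bundle of futures whose spread reflects both the learned diffusion (behavioral uncertainty) and the stochastic network weights.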
Related papers
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding [62.075029712357]
This work introduces the Cognitive Diffusion Probabilistic Models (CogDPM).
CogDPM features a precision estimation method based on the hierarchical sampling capabilities of diffusion models and weights the guidance with precision weights estimated from the inherent properties of diffusion models.
We apply CogDPM to real-world prediction tasks using the United Kingdom precipitation and surface wind datasets.
arXiv Detail & Related papers (2024-05-03T15:54:50Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
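As a hedged illustration of "an explicit physics model with learnable parameters" inside a network, the sketch below wraps classic social-force terms in a module whose coefficients are trained by backpropagation; the coefficient names and values are assumptions for illustration, not taken from the NSP code.
```python
# Hedged sketch of the general NSP idea: differentiable social forces with
# learnable coefficients, usable as one component inside a larger network.
import torch
import torch.nn as nn

class LearnableSocialForce(nn.Module):
    """Social-force terms whose coefficients are optimized by backprop."""
    def __init__(self):
        super().__init__()
        self.tau = nn.Parameter(torch.tensor(0.5))   # relaxation time
        self.A = nn.Parameter(torch.tensor(2.0))     # repulsion strength
        self.B = nn.Parameter(torch.tensor(0.3))     # repulsion range

    def forward(self, pos, vel, goal, neighbors):
        # pos, vel, goal: (N, 2); neighbors: (M, 2).
        # Goal attraction: steer the current velocity toward the goal.
        to_goal = goal - pos
        desired = to_goal / to_goal.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        f_goal = (desired - vel) / self.tau
        # Pairwise exponential repulsion from neighbouring pedestrians.
        diff = pos.unsqueeze(1) - neighbors               # (N, M, 2)
        dist = diff.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        f_rep = (self.A * torch.exp(-dist / self.B) * diff / dist).sum(dim=1)
        return f_goal + f_rep                             # acceleration, (N, 2)
```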
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty with a Bayesian approximation, capturing variability that deterministic approaches miss.
The effect of dropout weights and long-term prediction on future-state uncertainty is studied.
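A minimal sketch of the kind of Bayesian approximation this summary alludes to, assuming Monte Carlo dropout; TrajNet, its layer sizes, and the 8-step-in / 12-step-out horizon are illustrative assumptions, not details from the paper.
```python
# MC-dropout uncertainty for trajectory forecasting (illustrative only).
import torch
import torch.nn as nn

class TrajNet(nn.Module):
    """Maps 8 observed (x, y) steps to 12 predicted (x, y) steps."""
    def __init__(self, obs_len=8, pred_len=12, hidden=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_len * 2, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, pred_len * 2),
        )
        self.pred_len = pred_len

    def forward(self, obs):                       # obs: (batch, obs_len, 2)
        out = self.net(obs.flatten(1))
        return out.view(-1, self.pred_len, 2)

def mc_dropout_forecast(model, obs, n_samples=50):
    """Keep dropout active at test time and average over stochastic passes;
    the spread across passes approximates the predictive uncertainty."""
    model.train()                                 # dropout stays on
    with torch.no_grad():
        samples = torch.stack([model(obs) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)        # mean path, per-step std
```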
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications [0.0]
It is common to incorporate the uncertainty of predictions; such subsurface models should provide calibrated probabilities and the associated uncertainties in their predictions.
It has been shown that popular Deep Learning-based models are often miscalibrated, and due to their deterministic nature, provide no means to interpret the uncertainty of their predictions.
We compare three different approaches for obtaining probabilistic models based on convolutional neural networks in a Bayesian formalism.
arXiv Detail & Related papers (2021-05-25T17:54:23Z)
- Probabilistic solution of chaotic dynamical system inverse problems using Bayesian Artificial Neural Networks [0.0]
Inverse problems for chaotic systems are numerically challenging.
Small perturbations in model parameters can cause very large changes in estimated forward trajectories.
Bayesian Artificial Neural Networks can be used to simultaneously fit a model and estimate model parameter uncertainty.
arXiv Detail & Related papers (2020-05-26T20:35:02Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our results highlight that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.