Interpretable Social Anchors for Human Trajectory Forecasting in Crowds
- URL: http://arxiv.org/abs/2105.03136v1
- Date: Fri, 7 May 2021 09:22:34 GMT
- Title: Interpretable Social Anchors for Human Trajectory Forecasting in Crowds
- Authors: Parth Kothari, Brian Sifringer and Alexandre Alahi
- Abstract summary: We propose a neural network-based system to predict human trajectories in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
- Score: 84.20437268671733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human trajectory forecasting in crowds, at its core, is a sequence prediction
problem with specific challenges of capturing inter-sequence dependencies
(social interactions) and consequently predicting socially-compliant multimodal
distributions. In recent years, neural network-based methods have been shown to
outperform hand-crafted methods on distance-based metrics. However, these
data-driven methods still suffer from one crucial limitation: lack of
interpretability. To overcome this limitation, we leverage the power of
discrete choice models to learn interpretable rule-based intents, and
subsequently utilise the expressibility of neural networks to model
scene-specific residual. Extensive experimentation on the interaction-centric
benchmark TrajNet++ demonstrates the effectiveness of our proposed architecture
to explain its predictions without compromising the accuracy.
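The abstract describes a two-stage design: a discrete set of interpretable, rule-based intents (anchors) produces coarse candidate trajectories, and a neural network models the remaining scene-specific residual. The following is a minimal, hypothetical sketch of that anchor-plus-residual idea; the intent rules, function names, and horizon below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def anchor_predictions(pos, vel, horizon=12, dt=0.4):
    """Roll out each rule-based intent as a candidate trajectory.

    Hypothetical intents: keep current heading, or veer 30 degrees
    left/right, each followed by a constant-velocity rollout.
    """
    angles = {"keep": 0.0, "left": np.pi / 6, "right": -np.pi / 6}
    anchors = {}
    for name, a in angles.items():
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        v = rot @ vel                              # rotated velocity
        steps = np.arange(1, horizon + 1)[:, None] * dt
        anchors[name] = pos + steps * v            # (horizon, 2) rollout
    return anchors

def forecast(pos, vel, residual_fn):
    """Select an anchor (here fixed to 'keep') and add a learned residual."""
    anchors = anchor_predictions(pos, vel)
    base = anchors["keep"]
    return base + residual_fn(base), anchors

# With a zero residual, the forecast reduces to the pure rule-based
# (fully interpretable) prediction.
pred, anchors = forecast(np.zeros(2), np.array([1.0, 0.0]),
                         residual_fn=lambda t: np.zeros_like(t))
```

In the paper's framing, the anchor selection would come from a discrete choice model over the intents, which is what makes the coarse prediction explainable; the sketch hard-codes that choice for brevity.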
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Interpretable Goal-Based model for Vehicle Trajectory Prediction in Interactive Scenarios [4.1665957033942105]
Social interaction between a vehicle and its surroundings is critical for road safety in autonomous driving.
We propose a neural network-based model for the task of vehicle trajectory prediction in an interactive environment.
We implement and evaluate our model using the INTERACTION dataset and demonstrate the effectiveness of our proposed architecture.
arXiv Detail & Related papers (2023-08-08T15:00:12Z)
- Multiple-level Point Embedding for Solving Human Trajectory Imputation with Prediction [7.681950806902859]
Sparsity is a common issue in many trajectory datasets, including human mobility data.
This work plans to explore whether the learning process of imputation and prediction could benefit from each other to achieve better outcomes.
arXiv Detail & Related papers (2023-01-11T14:13:23Z)
- DANLIP: Deep Autoregressive Networks for Locally Interpretable Probabilistic Forecasting [0.0]
We propose a novel deep learning-based probabilistic time series forecasting architecture that is intrinsically interpretable.
We show that our model is not only interpretable but also provides comparable performance to state-of-the-art probabilistic time series forecasting methods.
arXiv Detail & Related papers (2023-01-05T23:40:23Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
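The NUQ summary above builds on the Nadaraya-Watson estimator of the conditional label distribution. A hedged sketch of that general idea, assuming a Gaussian kernel and using predictive entropy as the uncertainty read-out (the NUQ method's own bandwidth selection and asymptotic analysis are omitted):

```python
import numpy as np

def nw_class_probs(x, X_train, y_train, n_classes, h=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted label average."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))          # Gaussian kernel weights
    probs = np.zeros(n_classes)
    for c in range(n_classes):
        probs[c] = w[y_train == c].sum()
    return probs / probs.sum()

def entropy(p, eps=1e-12):
    """Predictive entropy as an uncertainty score."""
    return float(-np.sum(p * np.log(p + eps)))

# Toy 1-D data: two well-separated classes.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
p_near = nw_class_probs(np.array([0.05]), X, y, 2)  # deep inside class 0
p_mid = nw_class_probs(np.array([2.5]), X, y, 2)    # between the clusters
```

A query deep inside one cluster yields a near-one-hot distribution (low entropy), while a query between clusters yields a mixed distribution (high entropy), which is the uncertainty signal such estimators expose.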
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Cross-Validation and Uncertainty Determination for Randomized Neural Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources, in line with green machine learning.
Results on supervised learning with such networks and regression methods are discussed in terms of consistency and bounds on the generalization and prediction error.
arXiv Detail & Related papers (2021-01-06T12:28:06Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
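The entry above rests on Monte-Carlo dropout: keeping dropout active at inference and running several stochastic forward passes, so the spread of the outputs serves as the uncertainty signal a human-in-the-loop system can use to decide when to defer to an expert. A minimal numpy sketch, with a tiny hypothetical two-layer network and an illustrative deferral threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tiny network: 4 inputs -> 8 hidden (ReLU) -> 1 output.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def mc_dropout_predict(x, T=100, p_drop=0.5):
    """Mean and std-dev over T forward passes with dropout kept active."""
    outs = []
    for _ in range(T):
        mask = rng.random(8) > p_drop              # Bernoulli dropout mask
        h = np.maximum(x @ W1, 0.0) * mask / (1 - p_drop)  # inverted dropout
        outs.append((h @ W2).item())
    return float(np.mean(outs)), float(np.std(outs))

mean, std = mc_dropout_predict(np.ones(4))
defer_to_human = std > 1.0  # hypothetical uncertainty threshold
```

Routing only the high-variance predictions to a human operator is what lets such a system trade off automation against reliability at different levels of human involvement.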
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Depth Uncertainty in Neural Networks [2.6763498831034043]
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes.
By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass.
We validate our approach on real-world regression and image classification tasks.
arXiv Detail & Related papers (2020-06-15T14:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.