Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
- URL: http://arxiv.org/abs/2211.08701v1
- Date: Wed, 16 Nov 2022 06:28:20 GMT
- Title: Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
- Authors: Masha Itkina and Mykel J. Kochenderfer
- Abstract summary: We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
- Score: 50.79827516897913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although neural networks have seen tremendous success as predictive models in
a variety of domains, they can be overly confident in their predictions on
out-of-distribution (OOD) data. To be viable for safety-critical applications,
like autonomous vehicles, neural networks must accurately estimate their
epistemic or model uncertainty, achieving a level of system self-awareness.
Techniques for epistemic uncertainty quantification often require OOD data
during training or multiple neural network forward passes during inference.
These approaches may not be suitable for real-time performance on
high-dimensional inputs. Furthermore, existing methods lack interpretability of
the estimated uncertainty, which limits their usefulness both to engineers for
further system development and to downstream modules in the autonomy stack. We
propose the use of evidential deep learning to estimate the epistemic
uncertainty over a low-dimensional, interpretable latent space in a trajectory
prediction setting. We introduce an interpretable paradigm for trajectory
prediction that distributes the uncertainty among the semantic concepts: past
agent behavior, road structure, and social context. We validate our approach on
real-world autonomous driving data, demonstrating superior performance over
state-of-the-art baselines. Our code is available at:
https://github.com/sisl/InterpretableSelfAwarePrediction.
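As a rough illustration of the idea in the abstract, the sketch below shows an evidential head placed over a small set of discrete latent modes, one head per semantic concept, using the standard evidential deep learning parameterisation (Dirichlet concentration = evidence + 1). It assumes PyTorch; the class and variable names are hypothetical and not taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialConceptHead(nn.Module):
    """Evidential layer over a small discrete latent (illustrative sketch).

    Maps an encoder feature for one semantic concept (e.g. past agent
    behavior, road structure, or social context) to Dirichlet evidence
    over K latent modes."""

    def __init__(self, feat_dim: int, num_modes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_modes)

    def forward(self, feats: torch.Tensor):
        evidence = F.softplus(self.fc(feats))        # non-negative evidence e_k
        alpha = evidence + 1.0                       # Dirichlet concentrations
        strength = alpha.sum(dim=-1, keepdim=True)   # total evidence S
        prob = alpha / strength                      # expected categorical p_k
        # Epistemic uncertainty K / S: near 1 with little evidence, near 0 with much.
        epistemic = alpha.shape[-1] / strength
        return prob, epistemic

# Hypothetical usage: one head per semantic concept, so uncertainty can be
# read off per concept rather than as a single opaque scalar.
if __name__ == "__main__":
    feats = {"behavior": torch.randn(4, 64),
             "road": torch.randn(4, 64),
             "social": torch.randn(4, 64)}
    heads = {name: EvidentialConceptHead(64, num_modes=8) for name in feats}
    for name, x in feats.items():
        p, u = heads[name](x)
        print(name, p.shape, u.squeeze(-1))
```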
Related papers
- Reliable Probabilistic Human Trajectory Prediction for Autonomous Applications [1.8294777056635267]
Vehicle systems need reliable, accurate, fast, resource-efficient, scalable, and low-latency trajectory predictions.
This paper presents a lightweight method to address these requirements, combining Long Short-Term Memory and Mixture Density Networks.
We discuss essential requirements for human trajectory prediction in autonomous vehicle applications and demonstrate our method's performance using traffic-related datasets.
arXiv Detail & Related papers (2024-10-09T14:08:39Z)
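A hedged sketch of the generic LSTM + Mixture Density Network pattern this entry describes; dimensions, the diagonal-Gaussian components, and all names are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class LSTMMDN(nn.Module):
    """Encode an observed trajectory with an LSTM and emit a Gaussian
    mixture over the next 2D position (illustrative sketch only)."""

    def __init__(self, hidden: int = 64, mixtures: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.mdn = nn.Linear(hidden, mixtures * 5)   # pi, mu_x, mu_y, sigma_x, sigma_y
        self.mixtures = mixtures

    def forward(self, past_xy: torch.Tensor):
        _, (h, _) = self.lstm(past_xy)               # past_xy: (B, T, 2)
        params = self.mdn(h[-1]).view(-1, self.mixtures, 5)
        pi = torch.softmax(params[..., 0], dim=-1)   # mixture weights
        mu = params[..., 1:3]                        # component means
        sigma = torch.exp(params[..., 3:5])          # positive std devs
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, target_xy):
    """Negative log-likelihood of the observed next position under the mixture."""
    dist = torch.distributions.Normal(mu, sigma)
    log_comp = dist.log_prob(target_xy.unsqueeze(1)).sum(-1)   # (B, K)
    return -torch.logsumexp(torch.log(pi) + log_comp, dim=-1).mean()
```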
- GRANP: A Graph Recurrent Attentive Neural Process Model for Vehicle Trajectory Prediction [3.031375888004876]
We propose a novel model named Graph Recurrent Attentive Neural Process (GRANP) for vehicle trajectory prediction.
GRANP contains an encoder with deterministic and latent paths, and a decoder for prediction.
We show that GRANP achieves state-of-the-art results and can efficiently quantify uncertainties.
arXiv Detail & Related papers (2024-04-09T05:51:40Z)
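The "deterministic and latent paths" refer to the standard neural-process design; below is a generic two-path skeleton under that assumption. It omits GRANP's graph recurrent attention, and all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathNeuralProcess(nn.Module):
    """Neural-process skeleton with a deterministic and a latent encoder path
    plus a decoder (illustrative sketch, not the GRANP architecture itself)."""

    def __init__(self, x_dim=2, y_dim=2, r_dim=64, z_dim=32):
        super().__init__()
        self.det_enc = nn.Sequential(nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, r_dim))
        self.lat_enc = nn.Sequential(nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, 2 * z_dim))
        self.decoder = nn.Sequential(nn.Linear(x_dim + r_dim + z_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, 2 * y_dim))

    def forward(self, ctx_x, ctx_y, tgt_x):
        pair = torch.cat([ctx_x, ctx_y], dim=-1)                  # (B, N, x+y)
        r = self.det_enc(pair).mean(dim=1)                        # deterministic path
        mu_z, logvar_z = self.lat_enc(pair).mean(dim=1).chunk(2, dim=-1)
        z = mu_z + torch.randn_like(mu_z) * torch.exp(0.5 * logvar_z)  # latent path sample
        n_tgt = tgt_x.shape[1]
        cond = torch.cat([r, z], dim=-1).unsqueeze(1).expand(-1, n_tgt, -1)
        out = self.decoder(torch.cat([tgt_x, cond], dim=-1))
        mu_y, sigma_y = out.chunk(2, dim=-1)
        return mu_y, F.softplus(sigma_y)                          # predictive mean and spread
```

Sampling z repeatedly and reading the spread of the decoded trajectories is the usual way such models quantify uncertainty.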
- Interpretable Goal-Based model for Vehicle Trajectory Prediction in Interactive Scenarios [4.1665957033942105]
Social interaction between a vehicle and its surroundings is critical for road safety in autonomous driving.
We propose a neural network-based model for the task of vehicle trajectory prediction in an interactive environment.
We implement and evaluate our model using the INTERACTION dataset and demonstrate the effectiveness of our proposed architecture.
arXiv Detail & Related papers (2023-08-08T15:00:12Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that the proposed BNSP-SFM model achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Learning Uncertainty with Artificial Neural Networks for Improved Predictive Process Monitoring [0.114219428942199]
We distinguish two types of learnable uncertainty: model uncertainty due to a lack of training data and noise-induced observational uncertainty.
Our contribution is to apply these uncertainty concepts to predictive process monitoring, training uncertainty-aware models to predict the remaining time and outcomes.
arXiv Detail & Related papers (2022-06-13T17:05:27Z)
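One common way to realise the split between model and observational uncertainty is sketched below, assuming MC dropout for the former and a predicted-variance head for the latter; the paper's exact construction may differ.

```python
import torch
import torch.nn as nn

class HeteroscedasticMLP(nn.Module):
    """Regressor predicting a mean and an observation-noise variance, with
    dropout kept active at test time so repeated stochastic passes expose
    model (epistemic) uncertainty."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Dropout(p=0.2),
                                  nn.Linear(hidden, 2))   # mean, log-variance

    def forward(self, x):
        mean, log_var = self.body(x).chunk(2, dim=-1)
        return mean, log_var

def predict_with_uncertainty(model, x, samples: int = 30):
    model.train()                                    # keep dropout on (MC dropout)
    means, noise_vars = [], []
    for _ in range(samples):
        mean, log_var = model(x)
        means.append(mean)
        noise_vars.append(log_var.exp())
    means = torch.stack(means)                       # (S, B, 1)
    epistemic = means.var(dim=0)                     # spread across passes: model uncertainty
    aleatoric = torch.stack(noise_vars).mean(dim=0)  # predicted noise: observational uncertainty
    return means.mean(dim=0), epistemic, aleatoric
```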
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
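The core of a Nadaraya-Watson estimate is a kernel-weighted vote of training labels; the sketch below illustrates that idea with a Gaussian kernel and a fixed bandwidth, which are illustrative choices rather than NUQ's exact estimator.

```python
import torch
import torch.nn.functional as F

def nadaraya_watson_probs(query: torch.Tensor,
                          train_feats: torch.Tensor,
                          train_labels: torch.Tensor,
                          num_classes: int,
                          bandwidth: float = 1.0):
    """Kernel-weighted estimate of p(y | x): each training label votes with
    weight K(||x - x_i|| / h). Illustrative sketch of the idea, not the
    paper's exact method."""
    d2 = torch.cdist(query, train_feats).pow(2)                 # (Q, N) squared distances
    w = torch.exp(-0.5 * d2 / bandwidth ** 2)                   # Gaussian kernel weights
    one_hot = F.one_hot(train_labels, num_classes).float()      # (N, C)
    probs = (w @ one_hot) / w.sum(dim=1, keepdim=True).clamp_min(1e-12)
    # Low total kernel mass means the query sits far from the training data,
    # which is one signal of high (epistemic) uncertainty.
    total_mass = w.sum(dim=1)
    return probs, total_mass
```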
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
To make the uncertainty predictor reliable, we propose to explicitly train it in regions where no data is given.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
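A crude sketch of the pseudo-input idea follows; a simple "far from the data" heuristic in input space stands in for the paper's actual generation mechanism.

```python
import torch

def sample_pseudo_inputs(train_x: torch.Tensor, n: int, margin: float = 2.0):
    """Draw points from an inflated box around the training data, then keep
    only those far from every training point -- a crude stand-in for
    'informative low-density regions of the input space'."""
    lo, hi = train_x.min(dim=0).values, train_x.max(dim=0).values
    span = hi - lo
    candidates = (lo - margin * span
                  + torch.rand(4 * n, train_x.shape[1]) * (1 + 2 * margin) * span)
    dists = torch.cdist(candidates, train_x).min(dim=1).values
    keep = dists.argsort(descending=True)[:n]        # farthest candidates from the data
    return candidates[keep]

# During training, these pseudo-inputs would be fed to the uncertainty head
# with a "maximally uncertain" target, while real data keeps its usual loss.
```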
- Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectories in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z)
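The decomposition into an interpretable intent plus a learned residual can be sketched as below; a constant-velocity rule stands in for the paper's learned anchors and intents, and all names are hypothetical.

```python
import torch
import torch.nn as nn

def constant_velocity_intent(past_xy: torch.Tensor, horizon: int) -> torch.Tensor:
    """Interpretable rule-based intent: extrapolate the last observed velocity.
    (A stand-in for the paper's discrete social anchors / intents.)"""
    vel = past_xy[:, -1] - past_xy[:, -2]                       # (B, 2) last-step velocity
    steps = torch.arange(1, horizon + 1).view(1, horizon, 1)
    return past_xy[:, -1:, :] + steps * vel.unsqueeze(1)        # (B, H, 2)

class ResidualRefiner(nn.Module):
    """Small network predicting a scene-specific correction on top of the
    rule-based intent, so the final output = intent + learned residual."""

    def __init__(self, horizon: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(horizon * 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, horizon * 2))
        self.horizon = horizon

    def forward(self, past_xy: torch.Tensor) -> torch.Tensor:
        intent = constant_velocity_intent(past_xy, self.horizon)
        residual = self.net(intent.flatten(1)).view(-1, self.horizon, 2)
        return intent + residual
```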
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- DSDNet: Deep Structured self-Driving Network [92.9456652486422]
We propose the Deep Structured self-Driving Network (DSDNet), which performs object detection, motion prediction, and motion planning with a single neural network.
We develop a deep structured energy based model which considers the interactions between actors and produces socially consistent multimodal future predictions.
arXiv Detail & Related papers (2020-08-13T17:54:06Z)