Propagating State Uncertainty Through Trajectory Forecasting
- URL: http://arxiv.org/abs/2110.03267v2
- Date: Fri, 8 Oct 2021 00:47:03 GMT
- Title: Propagating State Uncertainty Through Trajectory Forecasting
- Authors: Boris Ivanovic, Yifeng Lin, Shubham Shrivastava, Punarjay Chakravarty,
Marco Pavone
- Abstract summary: Trajectory forecasting is surrounded by uncertainty as its inputs are produced by (noisy) upstream perception.
Most trajectory forecasting methods do not account for upstream uncertainty, instead taking only the most-likely values.
We present a novel method for incorporating perceptual state uncertainty in trajectory forecasting, a key component of which is a new statistical distance-based loss function.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty pervades the modern robotic autonomy stack, with nearly
every component (e.g., sensors, detection, classification, tracking, behavior
prediction) producing continuous or discrete probabilistic distributions.
Trajectory forecasting, in particular, is surrounded by uncertainty as its
inputs are produced by (noisy) upstream perception and its outputs are
predictions that are often probabilistic for use in downstream planning.
However, most trajectory forecasting methods do not account for upstream
uncertainty, instead taking only the most-likely values. As a result,
perceptual uncertainties are not propagated through forecasting and predictions
are frequently overconfident. To address this, we present a novel method for
incorporating perceptual state uncertainty in trajectory forecasting, a key
component of which is a new statistical distance-based loss function which
encourages predicting uncertainties that better match upstream perception. We
evaluate our approach both in illustrative simulations and on large-scale,
real-world data, demonstrating its efficacy in propagating perceptual state
uncertainty through prediction and producing more calibrated predictions.
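The exact form of the paper's statistical distance-based loss is not given in this summary. As an illustrative sketch only, a Kullback-Leibler divergence between the Gaussian state distribution reported by upstream perception and the Gaussian predicted by the forecaster is one statistical distance that could encourage the predicted uncertainty to match the perceptual one (all names below are hypothetical, not the paper's API):

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """KL divergence KL(N(mu_p, cov_p) || N(mu_q, cov_q)) for k-dim Gaussians."""
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (
        np.trace(cov_q_inv @ cov_p)          # scale mismatch between covariances
        + diff @ cov_q_inv @ diff            # mean mismatch, Mahalanobis-weighted
        - k
        + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p))
    )

# Illustrative use as a training penalty: discourage predicted uncertainty
# from deviating from the uncertainty reported by upstream perception.
# loss = gaussian_kl(mu_perception, cov_perception, mu_predicted, cov_predicted)
```

Minimizing such a term pulls the forecaster's output covariance toward the perceptual covariance, which is one way to make predictions better calibrated with respect to upstream uncertainty.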
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distributions of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Pedestrian Trajectory Forecasting Using Deep Ensembles Under Sensing Uncertainty
We consider an encoder-decoder based deep ensemble network for capturing both perception and predictive uncertainty simultaneously.
Overall, deep ensembles provided more robust predictions, and accounting for upstream uncertainty further improved the model's estimation accuracy.
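The summary does not specify how the ensemble members are combined; a common aggregation, shown here as a minimal sketch (array shapes and function names are assumptions, not the paper's interface), is to take the member mean as the forecast and the spread across members as a model-uncertainty estimate:

```python
import numpy as np

def ensemble_forecast(predictions):
    """Aggregate trajectory forecasts from M ensemble members.

    predictions: array of shape (M, T, 2) -- M members, T timesteps, (x, y).
    Returns the mean trajectory and a per-timestep covariance whose spread
    reflects disagreement between members (epistemic uncertainty).
    """
    preds = np.asarray(predictions, dtype=float)
    mean = preds.mean(axis=0)                      # (T, 2)
    centered = preds - mean                        # (M, T, 2)
    # Unbiased per-timestep sample covariance across the M members.
    cov = np.einsum('mti,mtj->tij', centered, centered) / (preds.shape[0] - 1)
    return mean, cov
```

When all members agree the covariance collapses to zero; large disagreement inflates it, which is the robustness property the summary attributes to deep ensembles.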
arXiv Detail & Related papers (2023-05-26T04:27:48Z)
- Creating Probabilistic Forecasts from Arbitrary Deterministic Forecasts using Conditional Invertible Neural Networks
We use a conditional Invertible Neural Network (cINN) to learn the underlying distribution of the data and then combine the uncertainty from this distribution with an arbitrary deterministic forecast.
Our approach enables the simple creation of probabilistic forecasts without complicated statistical loss functions or further assumptions.
arXiv Detail & Related papers (2023-02-03T15:11:39Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting via Bayesian approximation, capturing uncertainty that deterministic approaches fail to model.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
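Since the summary studies "dropout weights" as the Bayesian approximation, the standard mechanism is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated stochastic forward passes as predictive uncertainty. A minimal self-contained sketch (the two-layer network and all names are illustrative stand-ins, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights standing in for a trained forecasting network.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 2))

def forward_with_dropout(x, p=0.2):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Mean and variance over repeated stochastic passes; the variance
    approximates (epistemic) predictive uncertainty."""
    samples = np.stack([forward_with_dropout(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```

Increasing `n_samples` tightens the Monte Carlo estimate; longer prediction horizons would compound this per-step variance, which is the long-term effect the summary refers to.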
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training
We propose to make the uncertainty predictor reliable by explicitly training it in regions where no data is given.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
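The summary does not detail the pseudo-input generation mechanism; as one simple illustrative assumption (not the paper's exact scheme), pseudo-inputs can be drawn from a bounding box inflated beyond the training data and filtered to keep the points farthest from any training example, i.e. in low-density regions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pseudo_inputs(X_train, n, margin=2.0):
    """Draw candidates from an inflated bounding box around the training data,
    then keep the n candidates farthest from every training point."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = hi - lo
    candidates = rng.uniform(lo - margin * span, hi + margin * span,
                             size=(10 * n, X_train.shape[1]))
    # Distance from each candidate to its nearest training point.
    d = np.linalg.norm(candidates[:, None, :] - X_train[None, :, :],
                       axis=-1).min(axis=1)
    return candidates[np.argsort(d)[-n:]]  # keep the n most isolated points
```

Training the uncertainty head to output high uncertainty on such pseudo-inputs is one way to make it behave sensibly out of distribution.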
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification
Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
The main limitations of ensemble forecasting are its high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept experiments examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- DEUP: Direct Epistemic Uncertainty Prediction
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
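The decomposition described above is a simple subtraction, sketched here (how the two estimators are trained is not specified in this summary; the inputs are assumed to come from a learned error predictor and a learned aleatoric-uncertainty estimator):

```python
import numpy as np

def deup_epistemic(total_error_pred, aleatoric_pred):
    """DEUP-style decomposition: estimated epistemic uncertainty is the
    predicted generalization (total) error minus the estimated aleatoric
    (irreducible) uncertainty, clipped at zero since uncertainty cannot
    be negative."""
    total = np.asarray(total_error_pred, dtype=float)
    aleatoric = np.asarray(aleatoric_pred, dtype=float)
    return np.maximum(total - aleatoric, 0.0)
```

The clipping handles the case where the aleatoric estimate exceeds the predicted total error, which can happen when either estimator is imperfect.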
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.