PrognoseNet: A Generative Probabilistic Framework for Multimodal
Position Prediction given Context Information
- URL: http://arxiv.org/abs/2010.00802v1
- Date: Fri, 2 Oct 2020 06:13:41 GMT
- Authors: Thomas Kurbiel, Akash Sachdeva, Kun Zhao and Markus Buehren
- Abstract summary: We propose an approach which reformulates the prediction problem as a classification task, allowing powerful tools such as focal loss to combat data imbalance.
A smart choice of the latent variable allows for the reformulation of the log-likelihood function as a combination of a classification problem and a much simplified regression problem.
The proposed approach can easily incorporate context information and does not require any preprocessing of the data.
- Score: 2.5302126831371226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to predict multiple possible future positions of the ego-vehicle
given the surrounding context while also estimating their probabilities is key
to safe autonomous driving. Most of the current state-of-the-art Deep Learning
approaches are trained on trajectory data to achieve this task. However,
trajectory data captured by sensor systems is highly imbalanced, since by far
most of the trajectories follow straight lines with an approximately constant
velocity. This poses a huge challenge for the task of predicting future
positions, which is inherently a regression problem. Current state-of-the-art
approaches alleviate this problem only by major preprocessing of the training
data, e.g. resampling or clustering into anchors. In this paper we propose an
approach which reformulates the prediction problem as a classification task,
allowing for powerful tools, e.g. focal loss, to combat the imbalance. To this
end we design a generative probabilistic model consisting of a deep neural
network with a Mixture of Gaussian head. A smart choice of the latent variable
allows for the reformulation of the log-likelihood function as a combination of
a classification problem and a much simplified regression problem. The output
of our model is an estimate of the probability density function of future
positions, hence allowing for prediction of multiple possible positions while
also estimating their probabilities. The proposed approach can easily
incorporate context information and does not require any preprocessing of the
data.
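The decomposition described above can be sketched in a few lines. The function name, the hard nearest-mean latent assignment, and all parameter values below are illustrative assumptions, not the paper's exact formulation; the sketch only shows how a Mixture-of-Gaussians head lets the negative log-likelihood split into a focal-weighted classification term over components plus a simplified regression term:

```python
import numpy as np

def decomposed_mog_loss(pi, mu, sigma, y, gamma=2.0):
    """Hypothetical sketch: choosing the latent variable as the mixture
    component whose mean lies closest to the target splits the negative
    log-likelihood of a Mixture-of-Gaussians head into a classification
    term over components (here weighted by a focal loss to combat
    imbalance) and a simplified per-component regression term."""
    # hard latent assignment: nearest component mean
    k = int(np.argmin(np.sum((mu - y) ** 2, axis=-1)))
    # focal-weighted classification term on the mixture weights pi
    cls_term = -((1.0 - pi[k]) ** gamma) * np.log(pi[k])
    # simplified regression term: squared error to the chosen mean only
    reg_term = 0.5 * np.sum((mu[k] - y) ** 2) / sigma ** 2
    return cls_term + reg_term, k

# three predicted modes for a future 2-D position, with probabilities
pi = np.array([0.7, 0.2, 0.1])
mu = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
loss, k = decomposed_mog_loss(pi, mu, sigma=1.0, y=np.array([4.8, 5.1]))
```

Because the target lies near the second mode, the classification term dominates the loss and the focal weighting down-weights components the model already predicts confidently.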
Related papers
- A Rate-Distortion View of Uncertainty Quantification [36.85921945174863]
In supervised learning, understanding an input's proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction.
We introduce Distance Aware Bottleneck (DAB), a new method for enriching deep neural networks with this property.
arXiv Detail & Related papers (2024-06-16T01:33:22Z) - Investigating Low Data, Confidence Aware Image Prediction on Smooth Repetitive Videos using Gaussian Processes [25.319133815064557]
We focus on the problem of predicting future images of an image sequence with interpretable confidence bounds from very little training data.
We generate probability distributions over sequentially predicted images, and propagate uncertainty through time to generate a confidence metric for our predictions.
We showcase the capabilities of our approach on real world data by predicting pedestrian flows and weather patterns from satellite imagery.
arXiv Detail & Related papers (2023-07-20T22:35:27Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian
approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty via Bayesian approximation, capturing stochastic effects that deterministic approaches fail to model.
The effect of dropout weights and of the prediction horizon on future state uncertainty is studied.
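The dropout-based Bayesian approximation mentioned above is commonly realized as MC dropout. The network shape and names below are illustrative assumptions, not this paper's architecture; the sketch only shows the general recipe of keeping dropout active at inference and reading the spread of repeated forward passes as an approximate epistemic uncertainty:

```python
import numpy as np

def mc_dropout_predict(x, w1, w2, n_samples=200, drop_p=0.1, seed=0):
    """Hypothetical sketch of MC dropout: keep dropout active at
    inference, run the network several times, and use the spread of the
    sampled predictions as an (approximate) epistemic uncertainty."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(0.0, x @ w1)            # ReLU hidden layer
        mask = rng.random(h.shape) > drop_p    # fresh dropout mask each pass
        h = h * mask / (1.0 - drop_p)          # inverted-dropout scaling
        preds.append(h @ w2)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(42)
w1 = rng.normal(size=(4, 16))   # toy weights standing in for a trained net
w2 = rng.normal(size=(16, 2))
x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x, w1, w2)
```

For long-horizon forecasting, such per-step spreads are typically propagated forward, so uncertainty grows with the prediction horizon.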
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - PreTR: Spatio-Temporal Non-Autoregressive Trajectory Prediction
Transformer [0.9786690381850356]
We introduce a model called PRediction Transformer (PReTR) that extracts features from the multi-agent scenes by employing a factorized-temporal attention module.
It has lower computational cost than previously studied models while showing empirically better results.
We leverage encoder-decoder Transformer networks for parallel decoding a set of learned object queries.
arXiv Detail & Related papers (2022-03-17T12:52:23Z) - Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z) - Backward-Compatible Prediction Updates: A Probabilistic Approach [12.049279991559091]
We formalize the Prediction Update Problem and present an efficient probabilistic approach to it.
In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies for backward-compatible prediction updates.
arXiv Detail & Related papers (2021-07-02T13:05:31Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware
Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model uncertainty with present-day regression techniques remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
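The entropy-raising step described above can be illustrated with a small sketch. The function name and the fixed mixing weight `alpha` are assumptions for illustration (the paper's method decides conditionally where and how strongly to intervene); the sketch only shows that mixing an overconfident prediction with the label prior raises its entropy:

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a discrete distribution (nats)
    return float(-np.sum(p * np.log(p + 1e-12)))

def raise_entropy_toward_prior(probs, prior, alpha=0.5):
    """Hypothetical sketch: in regions flagged as unjustifiably
    overconfident, mix the predicted class distribution with the label
    prior; alpha=0 keeps the model output, alpha=1 returns the prior."""
    mixed = (1.0 - alpha) * probs + alpha * prior
    return mixed / mixed.sum()

probs = np.array([0.98, 0.01, 0.01])   # overconfident prediction
prior = np.full(3, 1.0 / 3.0)          # uniform label prior
calibrated = raise_entropy_toward_prior(probs, prior)
```

The mixed distribution keeps the same argmax class but with a far less extreme probability, which is the desired calibration effect.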
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Deep transformation models: Tackling complex regression problems with
neural network based transformation models [0.0]
We present a deep transformation model for probabilistic regression.
It estimates the whole conditional probability distribution, which is the most thorough way to capture uncertainty about the outcome.
Our method works for complex input data, which we demonstrate by employing a CNN architecture on image data.
arXiv Detail & Related papers (2020-04-01T14:23:12Z) - Ambiguity in Sequential Data: Predicting Uncertain Futures with
Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.