Comparison of Uncertainty Quantification with Deep Learning in Time
Series Regression
- URL: http://arxiv.org/abs/2211.06233v1
- Date: Fri, 11 Nov 2022 14:29:13 GMT
- Title: Comparison of Uncertainty Quantification with Deep Learning in Time
Series Regression
- Authors: Levente Foldesi and Matias Valdenegro-Toro
- Abstract summary: In this paper, different uncertainty estimation methods are compared on forecasting meteorological time series data.
Results show how each uncertainty estimation method performs on the forecasting task.
- Score: 7.6146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasingly high-stakes decisions are made using neural networks in order to
make predictions. Specifically, meteorologists and hedge funds apply these
techniques to time series data. When it comes to prediction, machine learning
models have certain limitations (such as lack of expressiveness, vulnerability
to domain shift, and overconfidence) which can be addressed using uncertainty
estimation. There is a set of expectations regarding how uncertainty should
"behave". For instance, a wider prediction horizon should lead to more
uncertainty, and the model's confidence should be proportional to its accuracy.
In this paper, different uncertainty estimation methods are compared on
forecasting meteorological time series data, and these expectations are
evaluated. The results show how each uncertainty estimation method performs
on the forecasting task, which partially evaluates the robustness of predicted
uncertainty.
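One of the expectations above, that a wider prediction horizon should lead to more uncertainty, can be illustrated with a toy sketch (not the paper's actual models): a small ensemble of AR(1)-style forecasters whose member disagreement compounds as the horizon grows. All names and numbers here are illustrative.

```python
import statistics

def horizon_spread(weights, x0, horizon):
    """Roll each toy AR(1)-style member forward `horizon` steps and
    report the variance across members at the final step."""
    preds = [w ** horizon * x0 for w in weights]
    return statistics.pvariance(preds)

weights = [0.9, 1.0, 1.1]   # stand-ins for independently trained models
spreads = [horizon_spread(weights, x0=1.0, horizon=h) for h in (1, 2, 4, 8)]
# Member disagreement compounds, so the spread grows with the horizon.
assert spreads == sorted(spreads)
```

In this toy setup the growth is mechanical (the members' parameters diverge multiplicatively); the paper tests whether real uncertainty estimators show the same qualitative behaviour.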
Related papers
- Efficient Normalized Conformal Prediction and Uncertainty Quantification
for Anti-Cancer Drug Sensitivity Prediction with Deep Regression Forests
Conformal Prediction has emerged as a promising method to pair machine learning models with prediction intervals.
We propose a method to estimate the uncertainty of each sample by calculating the variance obtained from a Deep Regression Forest.
arXiv Detail & Related papers (2024-02-21T19:09:53Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Pedestrian Trajectory Forecasting Using Deep Ensembles Under Sensing Uncertainty
We consider an encoder-decoder based deep ensemble network for capturing both perception and predictive uncertainty simultaneously.
Overall, deep ensembles provided more robust predictions and the consideration of upstream uncertainty further increased the estimation accuracy for the model.
arXiv Detail & Related papers (2023-05-26T04:27:48Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify the uncertainty during forecasting using Bayesian approximation, which deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification
Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts along with an estimate of their uncertainty.
Its main limitations are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept experiments examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- Dense Uncertainty Estimation
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, cast in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals
We explore the uncertainty in deep learning to construct prediction intervals.
We design a special loss function, which enables us to learn uncertainty without uncertainty labels.
Our method correlates the construction of prediction intervals with the uncertainty estimation.
arXiv Detail & Related papers (2021-04-27T02:58:20Z)
- Learning to Predict Error for MRI Reconstruction
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
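Several related papers above (pedestrian trajectory forecasting, dense uncertainty estimation) rely on the deep-ensemble recipe: train several networks independently, take the mean of their predictions as the forecast, and read their disagreement as the uncertainty. A minimal sketch, with toy linear members standing in for trained networks (all values illustrative):

```python
import statistics

def ensemble_predict(members, x):
    """Deep-ensemble style prediction: the mean over members is the
    forecast; the variance across members estimates the uncertainty."""
    preds = [m(x) for m in members]
    return statistics.fmean(preds), statistics.pvariance(preds)

# Toy members with slightly different weights, standing in for
# independently trained networks.
members = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, var = ensemble_predict(members, 2.0)  # mean 2.0, small variance
```

In practice the members differ through random initialization and data shuffling rather than hand-set weights, and each member may also output its own variance (a Gaussian head), which is then combined with the across-member spread.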
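The Bayesian-approximation entry above is commonly implemented with Monte Carlo dropout: keep dropout active at test time and treat repeated stochastic forward passes as samples from the predictive distribution. A hedged sketch with a toy linear model rather than any listed paper's network; `mc_dropout_predict` and its arguments are illustrative names:

```python
import random
import statistics

def mc_dropout_predict(weights, x, p=0.5, n_samples=500, seed=0):
    """Monte Carlo dropout: leave dropout on at inference and summarize
    the stochastic forward passes by their mean and variance."""
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p)  # inverted-dropout rescaling of survivors
    samples = []
    for _ in range(n_samples):
        # Independently drop each weight with probability p.
        kept = [w * scale if rng.random() >= p else 0.0 for w in weights]
        samples.append(sum(w * x for w in kept))
    return statistics.fmean(samples), statistics.pvariance(samples)

mean, var = mc_dropout_predict([0.2, 0.3, 0.5], x=1.0)
# The sample mean approaches the deterministic output (here 1.0);
# the sample variance is the uncertainty estimate.
```

The number of forward passes trades compute for a smoother estimate, which is one of the practical axes on which such methods are compared.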
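The normalized conformal prediction entry above combines a held-out calibration set with a per-sample difficulty estimate (the regression-forest variance). A sketch of the standard split-conformal recipe under that normalization; the calibration numbers below are made up for illustration and the function name is an assumption, not the paper's API:

```python
import math

def normalized_conformal_interval(cal_residuals, cal_sigmas,
                                  y_hat, sigma, alpha=0.1):
    """Split conformal prediction with normalized scores: calibrate the
    (1 - alpha) quantile of |residual| / sigma on held-out data, then
    scale it by the test point's own difficulty estimate sigma."""
    scores = sorted(abs(r) / s for r, s in zip(cal_residuals, cal_sigmas))
    n = len(scores)
    # Rank of the conformal quantile, capped at the largest score.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return y_hat - q * sigma, y_hat + q * sigma

# Toy calibration set: residuals and per-sample scale estimates.
lo, hi = normalized_conformal_interval(
    cal_residuals=[0.5, 1.0, 1.5, 2.0],
    cal_sigmas=[1.0, 1.0, 1.0, 1.0],
    y_hat=10.0, sigma=0.5, alpha=0.2)  # -> (9.0, 11.0)
```

Normalizing by sigma makes the intervals adaptive: easy test points get narrow intervals and hard ones get wide intervals, while the marginal coverage guarantee of split conformal is preserved.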
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.