Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification
- URL: http://arxiv.org/abs/2111.14844v1
- Date: Mon, 29 Nov 2021 16:52:17 GMT
- Title: Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification
- Authors: Maximiliano A. Sacco, Juan J. Ruiz, Manuel Pulido and Pierre Tandeo
- Abstract summary: Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
The main limitations of ensemble forecasting are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept model experiments are conducted to examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
- Score: 0.13999481573773068
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Producing an accurate weather forecast and a reliable quantification of its
uncertainty is an open scientific challenge. Ensemble forecasting is, so far,
the most successful approach to produce relevant forecasts along with an
estimation of their uncertainty. The main limitations of ensemble forecasting
are the high computational cost and the difficulty of capturing and
quantifying different sources of uncertainty, particularly those associated
with model errors. In this work, proof-of-concept model experiments are
conducted to examine the performance of ANNs trained to predict a corrected
state of the system and the state uncertainty using only a single
deterministic forecast as input. We compare different training strategies: one
based on direct training, using the mean and spread of an ensemble forecast as
targets; the others rely on an indirect training strategy, using a
deterministic forecast as target, in which the uncertainty is implicitly
learned from the data. For the latter approach, two alternative loss functions
are proposed and evaluated: one based on the data observation likelihood and
the other on a local estimation of the error. The performance of the networks
is examined at different lead times and in scenarios with and without model
errors.
Experiments using the Lorenz'96 model show that the ANNs are able to emulate
some of the properties of ensemble forecasts like the filtering of the most
unpredictable modes and a state-dependent quantification of the forecast
uncertainty. Moreover, ANNs provide a reliable estimation of the forecast
uncertainty in the presence of model error.
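The likelihood-based indirect training described in the abstract can be illustrated with a minimal sketch: a network head predicts both a corrected state (the mean) and a log-variance, and is trained by minimising the Gaussian negative log-likelihood of the observations. The function name, toy data, and architecture below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_nll(y_obs, mean, log_var):
    """Negative log-likelihood of observations under a Gaussian whose mean
    (corrected state) and variance (forecast uncertainty) are both predicted.
    Predicting log-variance keeps the variance positive by construction."""
    var = np.exp(log_var)
    return 0.5 * np.mean(log_var + (y_obs - mean) ** 2 / var)

# Toy check: with a perfect mean, the loss is lower when the predicted
# variance matches the actual error variance than when it is mis-specified.
rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=2.0, size=10_000)    # "observations", true var = 4
mean = np.zeros_like(y)                             # perfect corrected state
loss_good = gaussian_nll(y, mean, np.full_like(y, np.log(4.0)))  # var = 4
loss_bad = gaussian_nll(y, mean, np.full_like(y, np.log(1.0)))   # var = 1
assert loss_good < loss_bad
```

Minimising this loss against single deterministic targets is what lets the uncertainty be learned implicitly from the data, without an ensemble spread as an explicit training target.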
Related papers
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings.
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting that includes multiple models and supports several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Comparison of Uncertainty Quantification with Deep Learning in Time Series Regression [7.6146285961466]
In this paper, different uncertainty estimation methods are compared to forecast meteorological time series data.
Results show how each uncertainty estimation method performs on the forecasting task.
arXiv Detail & Related papers (2022-11-11T14:29:13Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing variability that deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting [5.007231239800297]
This work applies state-of-the-art techniques of uncertainty quantification in a real-world setting of air quality forecasts.
We describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimate, and practical applicability.
Our experiments demonstrate that the proposed models perform better than previous works in quantifying uncertainty in data-driven air quality forecasts.
arXiv Detail & Related papers (2021-12-05T17:01:18Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimations solutions, namely ensemble based methods and generative model based methods, and explain their pros and cons while using them in fully/semi/weakly-supervised framework.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Learning Prediction Intervals for Model Performance [1.433758865948252]
We propose a method to compute prediction intervals for model performance.
We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines.
arXiv Detail & Related papers (2020-12-15T21:32:03Z)
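Several of the related papers above rely on sampling-based uncertainty estimates, such as Monte Carlo dropout: dropout is kept active at inference, and the spread across stochastic forward passes serves as a proxy for model (epistemic) uncertainty. The tiny untrained network below is a purely hypothetical sketch of that idea, not code from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_with_dropout(x, w1, w2, p=0.5):
    """One stochastic forward pass: dropout stays active at test time."""
    h = np.maximum(x @ w1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > p         # random dropout mask
    h = h * mask / (1.0 - p)               # inverted-dropout scaling
    return h @ w2

# Hypothetical tiny network; in practice the weights come from training.
w1 = rng.normal(size=(3, 16))
w2 = rng.normal(size=(16, 1))
x = np.ones((1, 3))

samples = np.stack([forward_with_dropout(x, w1, w2) for _ in range(200)])
pred_mean = samples.mean(axis=0)   # point forecast
pred_std = samples.std(axis=0)     # spread used as an uncertainty proxy
```

The same sample-then-summarize pattern underlies the inference-time sampling and Bayesian-approximation entries above; the methods differ mainly in how the stochastic forward passes are generated.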
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.