Robust Nonparametric Distribution Forecast with Backtest-based Bootstrap
and Adaptive Residual Selection
- URL: http://arxiv.org/abs/2202.07955v1
- Date: Wed, 16 Feb 2022 09:53:48 GMT
- Title: Robust Nonparametric Distribution Forecast with Backtest-based Bootstrap
and Adaptive Residual Selection
- Authors: Longshaokan Wang, Lingda Wang, Mina Georgieva, Paulo Machado, Abinaya
Ulagappa, Safwan Ahmed, Yan Lu, Arjun Bakshi, Farhad Ghassemi
- Abstract summary: Distribution forecasts can quantify forecast uncertainty and provide various forecast scenarios with their corresponding estimated probabilities.
We propose a practical and robust distribution forecast framework that relies on backtest-based bootstrap and adaptive residual selection.
- Score: 14.398720944586803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distribution forecasts can quantify forecast uncertainty and provide
various forecast scenarios with their corresponding estimated probabilities.
Accurate distribution forecasts are crucial for planning, for example when
making production capacity or inventory allocation decisions. We propose a
practical and robust distribution forecast framework that relies on
backtest-based bootstrap and adaptive residual selection. The proposed approach
is robust to the choice of the underlying forecasting model, accounts for
uncertainty around the input covariates, and relaxes the assumption that
residuals are independent of the covariates. It reduces the Absolute Coverage
Error by more than 63% compared to classic bootstrap approaches and by 2%-32%
compared to a variety of state-of-the-art deep learning approaches on in-house
product sales data and M4-hourly competition data.
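As a concrete illustration of the two ingredients named in the abstract, here is a minimal sketch of a backtest-based residual bootstrap together with the Absolute Coverage Error metric, assuming a rolling-origin backtest and a user-supplied point forecaster. All function names are illustrative, and the adaptive residual selection step is omitted; this is not the authors' implementation.

```python
import numpy as np

def rolling_backtest_residuals(y, fit_predict, n_splits=5, horizon=1):
    """Collect out-of-sample residuals from a rolling-origin backtest.
    y           : 1-D array of historical observations.
    fit_predict : callable(train) -> array of `horizon` point forecasts."""
    residuals = []
    for k in range(n_splits, 0, -1):
        cut = len(y) - k * horizon
        residuals.extend(y[cut:cut + horizon] - fit_predict(y[:cut]))
    return np.asarray(residuals)

def bootstrap_forecast_distribution(point_forecast, residuals,
                                    n_samples=1000, rng=None):
    """Residual bootstrap: resample backtest residuals and add them to the
    point forecast to form an empirical forecast distribution."""
    rng = np.random.default_rng(0) if rng is None else rng
    draws = rng.choice(residuals, size=(n_samples, len(point_forecast)))
    return point_forecast + draws          # shape: (n_samples, horizon)

def absolute_coverage_error(samples, actuals, nominal=0.9):
    """|empirical coverage - nominal coverage| of the central interval."""
    lo, hi = np.quantile(samples, [(1 - nominal) / 2, (1 + nominal) / 2], axis=0)
    return abs(np.mean((actuals >= lo) & (actuals <= hi)) - nominal)
```

Percentiles of the returned samples give prediction intervals; the paper's contribution lies in how the backtests are run and which residuals are adaptively kept, which this sketch does not capture.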
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection is a learning paradigm that allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek an idealized data distribution that maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
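For context on the rejection setting above, a minimal sketch of the classic confidence-threshold baseline that rejection methods are typically compared against; the paper's density-ratio approach is different, and the function name and threshold here are illustrative.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.8):
    """Confidence-threshold baseline for classification with rejection:
    abstain (return -1) whenever the top class probability is below
    `threshold`; otherwise return the argmax class."""
    conf, preds = probs.max(axis=1), probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)

probs = np.array([[0.9, 0.1], [0.55, 0.45]])   # softmax outputs
print(predict_with_rejection(probs))           # [ 0 -1]: second input rejected
```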
- Tackling Missing Values in Probabilistic Wind Power Forecasting: A Generative Approach [1.384633930654651]
We propose treating missing values and forecasting targets indifferently and predicting all unknown values simultaneously.
Compared with the traditional "impute, then predict" pipeline, the proposed approach achieves better performance in terms of continuous ranked probability score.
arXiv Detail & Related papers (2024-03-06T11:38:08Z)
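The entry above is evaluated with the continuous ranked probability score; a minimal sample-based CRPS estimator, using the standard energy-form identity, looks like this (function name illustrative):

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based estimator of the continuous ranked probability score:
    CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, with X, X' ~ F independent.
    Lower is better; rewards sharp, well-centered forecast distributions."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
print(crps_from_samples(rng.normal(0.0, 1.0, 500), y=0.2))
```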
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
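A toy sketch of what soft consistency means for a hierarchy, assuming sample-based forecasts for a parent series and its children; this illustrates the general idea only, not PROFHiT's actual distributional consistency loss:

```python
import numpy as np

def soft_coherence_penalty(parent_samples, child_samples, weight=1.0):
    """Toy soft-consistency regularizer for hierarchical forecasts: rather
    than forcing parent == sum(children) exactly (hard reconciliation),
    penalize the squared gap between the parent's forecast samples and
    the aggregated child samples."""
    aggregated = child_samples.sum(axis=0)     # sum over child series
    return weight * np.mean((parent_samples - aggregated) ** 2)

rng = np.random.default_rng(0)
children = rng.normal([[1.0], [2.0]], 0.1, size=(2, 1000))  # two child series
parent = rng.normal(3.0, 0.1, size=1000)                    # parent forecast
print(soft_coherence_penalty(parent, children))             # small: nearly coherent
```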
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
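A generic sketch of post-hoc inference-time sampling, here via input perturbation of a fixed pretrained model; the paper's actual sampling procedure may differ, and `model`, `noise_scale`, and the Gaussian perturbation are illustrative assumptions:

```python
import numpy as np

def inference_time_samples(model, x, n_samples=100, noise_scale=0.05, rng=None):
    """Perturb the input of a *fixed* pretrained model and collect the
    resulting outputs as an empirical predictive distribution: no
    parametric form assumed, no retraining required."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x, dtype=float)
    outputs = [model(x + rng.normal(0.0, noise_scale, size=x.shape))
               for _ in range(n_samples)]
    return np.stack(outputs)   # spread across samples ~= predictive uncertainty

samples = inference_time_samples(lambda x: x.sum(), np.ones(3))  # toy "model"
print(samples.mean(), samples.std())
```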
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts along with an estimate of their uncertainty.
Its main limitations are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept experiments examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
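A minimal sketch of the ensemble baseline the entry above refers to, assuming forecasts from several perturbed model runs are available:

```python
import numpy as np

def ensemble_mean_and_spread(members):
    """Classic ensemble forecasting: the member mean is the forecast and
    the member spread is the uncertainty estimate. Running many members
    is what makes this expensive; the paper trains ANNs to reproduce
    both quantities from a single deterministic forecast."""
    members = np.asarray(members)              # shape: (n_members, horizon)
    return members.mean(axis=0), members.std(axis=0)

rng = np.random.default_rng(0)
mean, spread = ensemble_mean_and_spread(rng.normal(0, 1, size=(20, 5)))
```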
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of a series' future values based on its history.
We propose a deep state space model for probabilistic time series forecasting in which the non-linear emission and transition models are parameterized by neural networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
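A toy sketch of the generative structure of a nonlinear state space model, with simple callables standing in for the learned emission and transition networks; this illustrates the model class, not the paper's architecture:

```python
import numpy as np

def simulate_state_space(transition, emission, z0, steps, noise=0.1, rng=None):
    """Minimal nonlinear state space model: a latent state evolves through
    a transition function and observations come from an emission function.
    In the paper both functions are neural networks; here they are toy
    callables standing in for them."""
    rng = np.random.default_rng(0) if rng is None else rng
    z, ys = np.asarray(z0, dtype=float), []
    for _ in range(steps):
        z = transition(z) + rng.normal(0.0, noise, size=z.shape)
        ys.append(emission(z) + rng.normal(0.0, noise))
    return np.array(ys)

y = simulate_state_space(np.tanh, lambda z: 2.0 * z, z0=np.zeros(1), steps=10)
```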
- Robust Validation: Confident Predictions Even When Distributions Shift [19.327409270934474]
We describe procedures for robust predictive inference, where a model provides uncertainty estimates on its predictions rather than point predictions.
We present a method that produces prediction sets (almost exactly) giving the right coverage level for any test distribution in an $f$-divergence ball around the training population.
An essential component of our methodology is to estimate the amount of expected future data shift and build robustness to it.
arXiv Detail & Related papers (2020-08-10T17:09:16Z)
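A minimal sketch in the spirit of the entry above, using standard split conformal prediction with a crude robustness knob; the paper's f-divergence-based correction is not implemented here, and `inflate` is an illustrative stand-in for it:

```python
import numpy as np

def conformal_interval(cal_residuals, point_forecast, alpha=0.1, inflate=0.0):
    """Split-conformal prediction interval sketch. `inflate` crudely mimics
    building in robustness to distribution shift by targeting a more
    conservative quantile; the paper sizes that correction from an
    estimated f-divergence ball, which this sketch does not do."""
    n = len(cal_residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha + inflate)) / n)
    q = np.quantile(np.abs(cal_residuals), level)
    return point_forecast - q, point_forecast + q

rng = np.random.default_rng(0)
lo, hi = conformal_interval(rng.normal(0, 1, 200), point_forecast=5.0)
```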
This list is automatically generated from the titles and abstracts of the papers on this site.