Uncertainty estimation for time series forecasting via Gaussian process
regression surrogates
- URL: http://arxiv.org/abs/2302.02834v1
- Date: Mon, 6 Feb 2023 14:52:56 GMT
- Title: Uncertainty estimation for time series forecasting via Gaussian process
regression surrogates
- Authors: Leonid Erlygin, Vladimir Zholobov, Valeriia Baklanova, Evgeny
Sokolovskiy, Alexey Zaytsev
- Abstract summary: We propose a new method for uncertainty estimation based on the surrogate Gaussian process model.
Our method can equip any base model with an accurate uncertainty estimate produced by a separate surrogate.
Compared to other approaches, the estimate remains computationally efficient, requiring the training of only one additional model.
- Score: 0.8733767481819791
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Machine learning models are widely used to solve real-world problems in
science and industry. To build robust models, we should quantify the
uncertainty of the model's predictions on new data. This study proposes a new
method for uncertainty estimation based on the surrogate Gaussian process
model. Our method can equip any base model with an accurate uncertainty
estimate produced by a separate surrogate. Compared to other approaches, the
estimate remains computationally efficient, requiring the training of only one
additional model, and does not rely on data-specific assumptions. The only
requirement is black-box access to the base model, which is typical.
Experiments for challenging time-series forecasting data show that surrogate
model-based methods provide more accurate confidence intervals than
bootstrap-based methods in both medium and small-data regimes and different
families of base models, including linear regression, ARIMA, and gradient
boosting.
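The surrogate idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: all modeling details (kernel choice, lag features, the toy data) are assumptions. The base model is treated as a black box; a Gaussian process is fit to the base model's own predictions, and the GP's predictive standard deviation serves as the uncertainty estimate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy time series turned into a supervised problem via two lag features.
t = np.arange(200, dtype=float)
y = np.sin(0.1 * t) + 0.1 * rng.standard_normal(200)
X = np.column_stack([y[:-2], y[1:-1]])  # lags y[t-2], y[t-1]
target = y[2:]

X_train, X_test = X[:150], X[150:]
y_train = target[:150]

# 1. Train any base model; only its predict() is used below (black box).
base = LinearRegression().fit(X_train, y_train)

# 2. Fit the surrogate GP to the base model's outputs, not the raw targets.
base_preds = base.predict(X_train)
surrogate = GaussianProcessRegressor(
    kernel=RBF() + WhiteKernel(),  # assumed kernel; the paper may differ
    normalize_y=True,
    random_state=0,
).fit(X_train, base_preds)

# 3. The GP's predictive std attaches an uncertainty to each base prediction.
mean, std = surrogate.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # ~95% interval
```

Because only one surrogate is trained, the cost is a single extra model fit, in contrast to bootstrap-based intervals, which require refitting the base model many times.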
Related papers
- Learning Robust Statistics for Simulation-based Inference under Model Misspecification [23.331522354991527]
We propose the first general approach to handle model misspecification that works across different classes of simulation-based inference methods.
We show that our method yields robust inference in misspecified scenarios, whilst still being accurate when the model is well-specified.
arXiv Detail & Related papers (2023-05-25T09:06:26Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Rigorous Assessment of Model Inference Accuracy using Language Cardinality [5.584832154027001]
We develop a systematic approach that minimizes bias and uncertainty in model accuracy assessment by replacing statistical estimation with deterministic accuracy measures.
We experimentally demonstrate the consistency and applicability of our approach by assessing the accuracy of models inferred by state-of-the-art inference tools.
arXiv Detail & Related papers (2022-11-29T21:03:26Z)
- Transfer Learning with Uncertainty Quantification: Random Effect Calibration of Source to Target (RECaST) [1.8047694351309207]
We develop a statistical framework for model predictions based on transfer learning, called RECaST.
We mathematically and empirically demonstrate the validity of our RECaST approach for transfer learning between linear models.
We examine our method's performance in a simulation study and in an application to real hospital data.
arXiv Detail & Related papers (2022-11-29T19:39:47Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Stability of clinical prediction models developed using statistical or machine learning methods [0.5482532589225552]
Clinical prediction models estimate an individual's risk of a particular health outcome, conditional on their values of multiple predictors.
Many models are developed using small datasets that lead to instability in the model and its predictions (estimated risks).
We show instability in a model's estimated risks is often considerable, and manifests itself as miscalibration of predictions in new data.
arXiv Detail & Related papers (2022-11-02T11:55:28Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry [70.10343492784465]
It is necessary to expose to the process engineer not only the model predictions but also their interpretability.
Model-agnostic local interpretability solutions based on LIME have recently emerged to improve the original method.
We present in this paper a novel approach, VAE-LIME, for local interpretability of data-driven models forecasting the temperature of the hot metal produced by a blast furnace.
arXiv Detail & Related papers (2020-07-15T07:07:07Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Model Repair: Robust Recovery of Over-Parameterized Statistical Models [24.319310729283636]
A new type of robust estimation problem is introduced where the goal is to recover a statistical model that has been corrupted after it has been estimated from data.
Methods are proposed for "repairing" the model using only the design and not the response values used to fit the model in a supervised learning setting.
arXiv Detail & Related papers (2020-05-20T08:41:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.