Threshold Martingales and the Evolution of Forecasts
- URL: http://arxiv.org/abs/2105.06834v1
- Date: Fri, 14 May 2021 13:49:55 GMT
- Title: Threshold Martingales and the Evolution of Forecasts
- Authors: Dean P. Foster and Robert A. Stine
- Abstract summary: We introduce a martingale that characterizes two properties of evolving forecast distributions.
The threshold martingale measures the proportion of the forecast distribution lying below a threshold.
We apply threshold martingales first to forecasts from simulated models and then to models that predict the winner in professional basketball games.
- Score: 3.858078488714278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a martingale that characterizes two properties of
evolving forecast distributions. Ideal forecasts of a future event behave as
martingales, sequentially updating the forecast to leverage the available
information as the future event approaches. The threshold martingale introduced
here measures the proportion of the forecast distribution lying below a
threshold. In addition to being calibrated, a threshold martingale has
quadratic variation that accumulates to a total determined by a quantile of the
initial forecast distribution. Deviations from calibration or total
volatility signal problems in the underlying model. Calibration adjustments are
well-known, and we augment these by introducing a martingale filter that
improves volatility while guaranteeing smaller mean squared error. Thus,
post-processing can rectify problems with calibration and volatility without
revisiting the original forecasting model. We apply threshold martingales
first to forecasts from simulated models and then to models that predict the
winner in professional basketball games.
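To make the construction concrete, here is a minimal simulation sketch (not the authors' code): the event is the terminal value of a Gaussian random walk, the forecast distribution at time t is the conditional law of that terminal value, and the threshold martingale is the forecast CDF evaluated at a fixed threshold c. All names and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, sigma, c = 100, 1.0, 5.0   # horizon, innovation scale, threshold (all illustrative)

def threshold_martingale(path):
    """M_t = P(X_T <= c | X_t) for a Gaussian random walk observed up to time t."""
    t = np.arange(T)                          # t = 0, ..., T-1
    var_left = (T - t) * sigma**2             # conditional variance of X_T given X_t
    m = norm.cdf((c - path[:T]) / np.sqrt(var_left))
    return np.append(m, float(path[T] <= c))  # M_T is the 0/1 outcome itself

qvs = []
for _ in range(2000):
    path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, sigma, T))))
    m = threshold_martingale(path)
    qvs.append(np.sum(np.diff(m) ** 2))       # realized quadratic variation of M

m0 = norm.cdf(c / (sigma * np.sqrt(T)))       # initial forecast CDF at the threshold
print(f"mean realized QV : {np.mean(qvs):.4f}")
print(f"M0 * (1 - M0)    : {m0 * (1 - m0):.4f}")
```

For a calibrated martingale that ends at the 0/1 indicator of the event, the expected total quadratic variation is M_0(1 - M_0), a quantity fixed by the initial forecast quantile; the simulated average above should match it closely.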
Related papers
- Enforcing tail calibration when training probabilistic forecast models [0.0]
We study how the loss function used to train probabilistic forecast models can be adapted to improve the reliability of forecasts made for extreme events.
We demonstrate that state-of-the-art models do not issue calibrated forecasts for extreme wind speeds, and that the calibration of forecasts for extreme events can be improved by suitable adaptations to the loss function during model training.
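For intuition, one standard way to steer a training loss toward the tail is to reweight a quantile (pinball) loss across levels. The sketch below is a generic illustration of that idea, not the loss construction used in the paper; the function name and weights are hypothetical.

```python
import numpy as np

def tail_weighted_pinball(y_true, q_pred, levels, tail_weight=5.0, tail_from=0.9):
    """Pinball (quantile) loss with extra weight on upper-tail quantile levels.

    y_true : (n,) observations
    q_pred : (n, k) predicted quantiles at the given levels
    levels : (k,) quantile levels in (0, 1)
    """
    levels = np.asarray(levels)
    err = y_true[:, None] - q_pred                       # positive when under-predicted
    pinball = np.maximum(levels * err, (levels - 1) * err)
    w = np.where(levels >= tail_from, tail_weight, 1.0)  # upweight tail levels
    return float(np.mean(pinball * w))

# Example: score quantile forecasts of wind speed, emphasizing the 0.9+ tail.
levels = np.array([0.1, 0.5, 0.9, 0.99])
y = np.array([3.2, 14.8, 7.1])
q = np.array([[1.0, 3.0, 6.0, 9.0],
              [2.0, 6.0, 12.0, 18.0],
              [1.5, 4.0, 8.0, 12.0]])
print(tail_weighted_pinball(y, q, levels))
```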
arXiv Detail & Related papers (2025-06-16T16:51:06Z)
- HopCast: Calibration of Autoregressive Dynamics Models [0.0]
This work introduces an alternative Predictor-Corrector approach named HopCast that uses Modern Hopfield Networks (MHN) to learn the errors of a deterministic Predictor.
The Corrector predicts a set of errors for the Predictor's output based on a context state at any timestep during autoregression.
The calibration and prediction performances are evaluated across a set of dynamical systems.
arXiv Detail & Related papers (2025-01-27T23:59:23Z)
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce ExBooster, which captures the uncertainty in prediction outcomes by employing multiple random samples.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
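Exloss is only described at a high level above; a generic asymmetric objective in the same spirit, penalizing under-prediction of large values more heavily, might look like the following hypothetical stand-in (not the published loss):

```python
import numpy as np

def asymmetric_mse(y_true, y_pred, under_weight=4.0):
    """Squared error that penalizes under-prediction more than over-prediction.

    A generic stand-in for an asymmetric objective; the paper's Exloss
    is defined differently in detail.
    """
    err = y_true - y_pred
    w = np.where(err > 0, under_weight, 1.0)  # err > 0 means we under-predicted
    return float(np.mean(w * err ** 2))

y, yhat = np.array([10.0, 2.0]), np.array([6.0, 3.0])
print(asymmetric_mse(y, yhat))  # the 4-unit shortfall dominates the score
```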
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- Attention-Based Ensemble Pooling for Time Series Forecasting [55.2480439325792]
We propose a method for pooling that performs a weighted average over candidate model forecasts.
We test this method on two time-series forecasting problems: multi-step forecasting of the dynamics of the non-stationary Lorenz 63 equation, and one-step forecasting of the weekly incident deaths due to COVID-19.
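A stripped-down version of weighted ensemble pooling, with softmax weights derived from recent model errors standing in for the paper's learned attention weights:

```python
import numpy as np

def attention_pool(forecasts, recent_errors, temperature=1.0):
    """Weighted average of candidate model forecasts.

    forecasts     : (m, h) -- m candidate models, h-step-ahead forecasts
    recent_errors : (m,)   -- each model's recent validation error
    Weights are a softmax of negative error, so better models get more weight.
    (A simplified stand-in for the paper's learned attention mechanism.)
    """
    scores = -np.asarray(recent_errors, dtype=float) / temperature
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ np.asarray(forecasts)

# Three candidate models forecasting two steps ahead:
f = [[1.0, 1.2], [0.8, 1.1], [1.5, 1.9]]
print(attention_pool(f, recent_errors=[0.2, 0.1, 0.9]))
```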
arXiv Detail & Related papers (2023-10-24T22:59:56Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
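A rough sketch of the alignment idea, treating confidence as the max class probability and certainty as one minus normalized entropy; the published MACC loss differs in detail, and this function is only an illustration:

```python
import numpy as np

def macc_style_penalty(probs, eps=1e-12):
    """Auxiliary penalty aligning mean confidence with mean certainty.

    probs : (n, k) softmax outputs.
    Confidence = max class probability; certainty = 1 - normalized entropy.
    A sketch of the idea only, not the published MACC loss.
    """
    conf = probs.max(axis=1)
    ent = -(probs * np.log(probs + eps)).sum(axis=1) / np.log(probs.shape[1])
    certainty = 1.0 - ent
    return float(abs(conf.mean() - certainty.mean()))

p = np.array([[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]])
print(macc_style_penalty(p))
```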
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Performative Prediction with Bandit Feedback: Learning through Reparameterization [23.039885534575966]
Performative prediction is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model.
We develop a reparameterization that expresses the performative prediction objective as a function of the induced data distribution.
arXiv Detail & Related papers (2023-05-01T21:31:29Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
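The idea can be sketched as an entropy term weighted by a per-sample difficulty score, here assumed to come from a pre-trained model and passed in directly; this is an illustration of the concept, not the paper's regularizer:

```python
import numpy as np

def difficulty_weighted_entropy(student_probs, difficulty, eps=1e-12):
    """Entropy regularizer weighted by per-sample difficulty.

    student_probs : (n, k) softmax outputs of the model being trained.
    difficulty    : (n,) scores in [0, 1], assumed supplied by a large
                    pre-trained model (hypothetical input here).
    Subtracting this term from the loss encourages higher predictive
    entropy on difficult samples, discouraging overconfidence.
    """
    ent = -(student_probs * np.log(student_probs + eps)).sum(axis=1)
    return float(np.mean(difficulty * ent))

probs = np.array([[0.7, 0.3], [0.55, 0.45]])
print(difficulty_weighted_entropy(probs, difficulty=np.array([0.2, 0.9])))
```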
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting that includes multiple models and supports several datasets.
We model two types of uncertainty in the problem to improve performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Forecast Hedging and Calibration [8.858351266850544]
We develop the concept of forecast hedging, which consists of choosing the forecasts so as to guarantee that the expected track record can only improve.
This yields all the calibration results by the same simple argument while differentiating between them by the forecast-hedging tools used.
Additional contributions are an improved definition of continuous calibration, ensuing game dynamics that yield Nash equilibria in the long run, and a new forecasting procedure for binary events that is simpler than all known such procedures.
arXiv Detail & Related papers (2022-10-13T16:48:25Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Low Rank Forecasting [0.0]
We consider the problem of forecasting multiple future values of a vector time series from some of its past values.
Our focus is on low rank forecasters, which break forecasting into two steps.
We introduce the concept of forecast consistency, which means that estimates of the same value made at different times are consistent.
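The two-step structure can be illustrated with a rank-constrained linear forecaster fit by least squares plus a truncated SVD; this is a sketch of the low-rank idea, not the paper's estimator, and all names and data are illustrative.

```python
import numpy as np

def fit_low_rank_forecaster(past, future, rank):
    """Fit a rank-constrained linear map from a past window to a future window.

    past   : (n, p) stacked past values; future : (n, f) stacked future values.
    Step 1 encodes the past into a rank-dimensional state; step 2 decodes
    the state into forecasts. (A least-squares/SVD sketch of the two-step
    structure described above.)
    """
    A, *_ = np.linalg.lstsq(past, future, rcond=None)  # full linear map (p, f)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    encoder = U[:, :rank] * s[:rank]                   # past -> low-dim state
    decoder = Vt[:rank]                                # state -> forecasts
    return encoder, decoder

rng = np.random.default_rng(1)
past = rng.normal(size=(200, 10))
future = past @ rng.normal(size=(10, 4)) + 0.1 * rng.normal(size=(200, 4))
enc, dec = fit_low_rank_forecaster(past, future, rank=3)
print(((past @ enc) @ dec).shape)  # (200, 4) forecasts from a 3-dim state
```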
arXiv Detail & Related papers (2021-01-29T05:59:19Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)