Threshold Martingales and the Evolution of Forecasts
- URL: http://arxiv.org/abs/2105.06834v1
- Date: Fri, 14 May 2021 13:49:55 GMT
- Title: Threshold Martingales and the Evolution of Forecasts
- Authors: Dean P. Foster and Robert A. Stine
- Abstract summary: We introduce a martingale that characterizes two properties of evolving forecast distributions.
The threshold martingale measures the proportion of the forecast distribution lying below a threshold.
We apply threshold martingales first to forecasts from simulated models and then to models that predict the winner in professional basketball games.
- Score: 3.858078488714278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a martingale that characterizes two properties of
evolving forecast distributions. Ideal forecasts of a future event behave as
martingales, sequentially updating the forecast to leverage the available
information as the future event approaches. The threshold martingale introduced
here measures the proportion of the forecast distribution lying below a
threshold. In addition to being calibrated, a threshold martingale has
quadratic variation that accumulates to a total determined by a quantile of the
initial forecast distribution. Deviations from calibration or total
volatility signal problems in the underlying model. Calibration adjustments are
well-known, and we augment these by introducing a martingale filter that
improves volatility while guaranteeing smaller mean squared error. Thus,
post-processing can rectify problems with calibration and volatility without
revisiting the original forecasting model. We apply threshold martingales
first to forecasts from simulated models and then to models that predict the
winner in professional basketball games.
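To make the construction concrete, here is a minimal simulation sketch (not the authors' code): for a Gaussian random walk X_t, the ideal forecast distribution of the terminal value X_T given time t is Normal(X_t, (T-t)σ²), so the threshold martingale is M_t = Φ((c − X_t)/(σ√(T−t))). Since M_T ∈ {0, 1}, the expected total quadratic variation is M_0(1 − M_0), a quantity fixed by where the threshold falls in the initial forecast distribution; the simulation checks that the average accumulated quadratic variation matches it.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def threshold_martingale_path(T, sigma, c, x0=0.0):
    """One path of M_t = P(X_T <= c | info at t) for a Gaussian
    random walk X_t, a toy 'ideal' forecasting model."""
    x = x0
    M = [phi((c - x) / (sigma * math.sqrt(T)))]
    for t in range(1, T + 1):
        x += random.gauss(0.0, sigma)
        remaining = T - t
        if remaining > 0:
            M.append(phi((c - x) / (sigma * math.sqrt(remaining))))
        else:
            # At the horizon the forecast resolves to 0 or 1.
            M.append(1.0 if x <= c else 0.0)
    return M

random.seed(0)
T, sigma, c = 50, 1.0, 1.0
m0 = phi(c / (sigma * math.sqrt(T)))  # initial forecast CDF at threshold
qv = []
for _ in range(2000):
    M = threshold_martingale_path(T, sigma, c)
    qv.append(sum((M[i + 1] - M[i]) ** 2 for i in range(T)))
# Average accumulated quadratic variation vs. the theoretical total m0*(1-m0):
print(m0 * (1 - m0), sum(qv) / len(qv))
```

A model whose accumulated quadratic variation falls short of (or exceeds) this total signals the volatility problems the paper's martingale filter is designed to correct.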
Related papers
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce a training-free extreme value enhancement strategy named ExEnsemble, which increases the variance of pixel values and improves the forecast robustness.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- Attention-Based Ensemble Pooling for Time Series Forecasting [55.2480439325792]
We propose a method for pooling that performs a weighted average over candidate model forecasts.
We test this method on two time-series forecasting problems: multi-step forecasting of the dynamics of the non-stationary Lorenz 63 equation, and one-step forecasting of the weekly incident deaths due to COVID-19.
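The weighted-average pooling idea can be sketched generically. The snippet below uses a softmax over negative recent errors as a stand-in for the learned attention scores; this scoring rule is an assumption for illustration, not the paper's actual attention mechanism.

```python
import math

def pooled_forecast(forecasts, recent_errors, temperature=1.0):
    """Weighted average of candidate model forecasts. Weights come from
    a softmax over negative recent errors (hypothetical stand-in for
    learned attention scores): lower recent error -> higher weight."""
    scores = [-e / temperature for e in recent_errors]
    mx = max(scores)  # subtract max for numerical stability
    w = [math.exp(s - mx) for s in scores]
    z = sum(w)
    weights = [wi / z for wi in w]
    pooled = sum(wi * f for wi, f in zip(weights, forecasts))
    return pooled, weights

# Three candidate models; the first had the smallest recent error.
pooled, weights = pooled_forecast([10.0, 12.0, 11.0], [0.5, 2.0, 1.0])
```

The pooled value always lies inside the range of the candidate forecasts, since the weights are a convex combination.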
arXiv Detail & Related papers (2023-10-24T22:59:56Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC)
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Performative Prediction with Bandit Feedback: Learning through Reparameterization [25.169419772432796]
We develop a framework that reparametrizes the performative prediction as a function of the induced data distribution.
We provide a regret bound that is sublinear in the total number of performative samples taken and depends only on the dimension of the model parameter.
On the application side, we believe our method is useful for large online recommendation systems like YouTube or TikTok.
arXiv Detail & Related papers (2023-05-01T21:31:29Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Forecast Hedging and Calibration [8.858351266850544]
We develop the concept of forecast hedging, which consists of choosing the forecasts so as to guarantee the expected track record can only improve.
This yields all the calibration results by the same simple argument while differentiating between them by the forecast-hedging tools used.
Additional contributions are an improved definition of continuous calibration, ensuing game dynamics that yield Nash equilibria in the long run, and a new forecasting procedure for binary events that is simpler than all known such procedures.
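Calibration, the property these hedging tools target, is straightforward to check empirically. The sketch below is a generic reliability-table computation (not the paper's procedure): it bins binary-event forecasts by predicted probability and compares the mean forecast to the empirical frequency in each bin.

```python
import random

def calibration_table(probs, outcomes, n_bins=10):
    """Bin forecast probabilities and compare mean forecast to empirical
    frequency per bin; a calibrated forecaster has the two close in
    every well-populated bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # p == 1.0 goes to last bin
        bins[i].append((p, y))
    table = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            freq = sum(y for _, y in cell) / len(cell)
            table.append((mean_p, freq, len(cell)))
    return table

# A deliberately calibrated toy forecaster: each outcome is drawn
# with exactly the forecast probability p.
random.seed(1)
probs = [random.random() for _ in range(20000)]
outcomes = [1 if random.random() < p else 0 for p in probs]
table = calibration_table(probs, outcomes)
```

For a miscalibrated forecaster the per-bin gaps are systematic, which is what post-hoc calibration adjustments like those discussed above are meant to remove.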
arXiv Detail & Related papers (2022-10-13T16:48:25Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Low Rank Forecasting [0.0]
We consider the problem of forecasting multiple values of the future of a vector time series, using some past values.
Our focus is on low rank forecasters, which break forecasting up into two steps.
We introduce the concept of forecast consistency, which means that the estimates of the same value made at different times are consistent.
arXiv Detail & Related papers (2021-01-29T05:59:19Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.