Faster online calibration without randomization: interval forecasts and the power of two choices
- URL: http://arxiv.org/abs/2204.13087v1
- Date: Wed, 27 Apr 2022 17:33:23 GMT
- Title: Faster online calibration without randomization: interval forecasts and the power of two choices
- Authors: Chirag Gupta, Aaditya Ramdas
- Abstract summary: We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.
Inspired by the works on the "power of two choices" and imprecise probability theory, we study a small variant of the standard online calibration problem.
- Score: 43.17917448937131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of making calibrated probabilistic forecasts for a
binary sequence generated by an adversarial nature. Following the seminal paper
of Foster and Vohra (1998), nature is often modeled as an adaptive adversary
who sees all activity of the forecaster except the randomization that the
forecaster may deploy. A number of papers have proposed randomized forecasting
strategies that achieve an $\epsilon$-calibration error rate of
$O(1/\sqrt{T})$, which we prove is tight in general. On the other hand, it is
well known that it is not possible to be calibrated without randomization, or
if nature also sees the forecaster's randomization; in both cases the
calibration error could be $\Omega(1)$. Inspired by the equally seminal works
on the "power of two choices" and imprecise probability theory, we study a
small variant of the standard online calibration problem. The adversary gives
the forecaster the option of making two nearby probabilistic forecasts, or
equivalently an interval forecast of small width, and the endpoint closest to
the revealed outcome is used to judge calibration. This power of two choices,
or imprecise forecast, affords the forecaster significant power: we show that
a faster $\epsilon$-calibration rate of $O(1/T)$ can be achieved even without
deploying any randomization.
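To make the setup concrete, here is a minimal Python sketch of the judging rule described in the abstract, together with one standard $\ell_1$ notion of calibration error. The fixed-interval forecaster in the toy run is a placeholder of our own, not the paper's algorithm, and does not achieve the $O(1/T)$ rate.

```python
import numpy as np

def judged_forecast(lo, hi, y):
    # Two-choice rule from the abstract: of the two endpoints of the
    # interval [lo, hi], the one closest to the revealed outcome
    # y in {0, 1} is the forecast that is scored for calibration.
    return lo if abs(y - lo) <= abs(y - hi) else hi

def l1_calibration_error(forecasts, outcomes):
    # One standard l1 calibration error: for each distinct forecast
    # value p, the gap between p and the empirical frequency of 1s on
    # the rounds where p was played, weighted by how often p was played.
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    T = len(forecasts)
    return sum(
        (np.sum(forecasts == p) / T) * abs(outcomes[forecasts == p].mean() - p)
        for p in np.unique(forecasts)
    )

# Toy run: a deliberately naive forecaster that always offers the
# width-0.2 interval [0.4, 0.6]; the judged endpoints are then scored.
rng = np.random.default_rng(0)
ys = rng.integers(0, 2, size=1000)
judged = [judged_forecast(0.4, 0.6, y) for y in ys]
print(l1_calibration_error(judged, ys))  # ~0.4: naive fixed intervals are not calibrated
```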
Related papers
- Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs [35.92045337126979]
We propose a strategy for teaching a model to both approximate $p(Y|X)$ and estimate the remaining gap between $\widehat{p}_\theta(Y|X)$ and $p(Y|X)$.
We demonstrate that our approach accurately estimates how much models don't know across ambiguous image classification, (synthetic) language modeling, and partially-observable navigation tasks.
arXiv Detail & Related papers (2024-02-13T19:01:45Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- Conformal Nucleus Sampling [67.5232384936661]
We assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts.
We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size.
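As background for the summary above, a top-$p$ (nucleus) set is the smallest set of tokens whose cumulative probability reaches $p$. The sketch below shows the generic construction, not the paper's evaluation code.

```python
import numpy as np

def top_p_set(probs, p=0.9):
    # Smallest set of token indices whose total probability reaches p.
    order = np.argsort(probs)[::-1]       # most probable tokens first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, p)) + 1  # first prefix with mass >= p
    return order[:k]

# The paper's calibration question, informally: across contexts, does the
# true next token land inside the top-p set roughly a p fraction of the time?
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_p_set(probs, p=0.8))  # -> [0 1 2], since 0.5 + 0.2 + 0.15 >= 0.8
```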
arXiv Detail & Related papers (2023-05-04T08:11:57Z)
- Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples [30.61588337557343]
Conformal predictors provide uncertainty estimates by computing a set of classes that contains the true class with a user-specified probability.
We propose a method that provides excellent uncertainty estimates under natural distribution shifts.
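The first sentence above describes the standard split conformal recipe; a minimal sketch under that reading follows. The paper's contribution, recalibrating the threshold under distribution shift from unlabeled examples, is not reproduced here.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    # Split conformal prediction for classification: score each calibration
    # example by 1 - p(true class), then take the ~(1 - alpha) quantile so
    # that prediction sets cover the true class with probability ~1 - alpha.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, q):
    # All classes whose score 1 - p(class) falls under the threshold.
    return np.where(1.0 - test_probs <= q)[0]
```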
arXiv Detail & Related papers (2022-10-09T04:46:00Z)
- Calibration of Natural Language Understanding Models with Venn--ABERS Predictors [0.0]
Transformers are prone to generating uncalibrated predictions or extreme probabilities.
We build several inductive Venn--ABERS predictors (IVAP) based on a selection of pre-trained transformers.
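As background, an inductive Venn--ABERS predictor fits isotonic regression twice, once with the test point provisionally labeled 0 and once labeled 1, and returns the pair $(p_0, p_1)$ as an interval of calibrated probabilities. Below is a naive per-test-point-refit sketch with scikit-learn, not the paper's transformer pipeline.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def ivap_interval(cal_scores, cal_labels, test_score):
    # Fit isotonic regression on the calibration scores plus the test
    # point, once pretending its label is 0 and once pretending it is 1;
    # the two fitted values at the test score bracket the calibrated
    # probability of class 1 (a "multiprobability" prediction).
    def fit_with(label):
        s = np.append(cal_scores, test_score)
        y = np.append(cal_labels, label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        return float(iso.fit(s, y).predict([test_score])[0])
    return fit_with(0), fit_with(1)  # (p0, p1), with p0 <= p1

# Example: an extreme score of 0.99 gets tempered by the calibration data.
rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
labels = (rng.uniform(size=200) < scores).astype(int)
print(ivap_interval(scores, labels, 0.99))
```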
arXiv Detail & Related papers (2022-05-21T13:09:01Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
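A minimal usage sketch with the open-source ngboost package follows, using the default univariate Normal output for brevity; the paper's multivariate extension is not shown, and the exact API calls here are an assumption of this sketch.

```python
import numpy as np
from ngboost import NGBRegressor  # pip install ngboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

# Out-of-the-box fit: boosts the parameters of a Normal predictive
# distribution using natural gradients.
ngb = NGBRegressor(n_estimators=200).fit(X, y)
dist = ngb.pred_dist(X[:5])                      # full predictive distribution
print(dist.params["loc"], dist.params["scale"])  # per-point mean and std
```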
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Individual Calibration with Randomized Forecasting [116.2086707626651]
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized.
We design a training objective to enforce individual calibration and use it to train randomized regression functions.
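In that spirit, here is a hypothetical randomized forecaster: sample $r \sim U(0,1)$ and output the $r$-th quantile of a Gaussian predictive distribution, so that $Y \le h(X, r)$ holds with probability $r$. This illustrates the role of randomization only; it is not the paper's learned training objective.

```python
import numpy as np
from scipy.stats import norm

def randomized_forecast(x, r, mu, sigma):
    # Output the r-th quantile of a Gaussian predictive distribution;
    # the level r is the forecaster's private randomness.
    return mu(x) + sigma(x) * norm.ppf(r)

rng = np.random.default_rng(0)
mu, sigma = (lambda x: 2 * x), (lambda x: np.ones_like(x))
xs = rng.normal(size=5000)
ys = mu(xs) + rng.normal(size=5000)   # data actually follows the model
rs = rng.uniform(size=5000)
covered = ys <= randomized_forecast(xs, rs, mu, sigma)
print(covered.mean())  # ~0.5 = E[r]: coverage matches the drawn levels
```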
arXiv Detail & Related papers (2020-06-18T05:53:10Z)
- Estimation of Accurate and Calibrated Uncertainties in Deterministic Models [0.8702432681310401]
We devise a method to transform a deterministic prediction into a probabilistic one.
We show that for doing so, one has to compromise between the accuracy and the reliability (calibration) of such a model.
We show several examples both with synthetic data, where the underlying hidden noise can accurately be recovered, and with large real-world datasets.
arXiv Detail & Related papers (2020-03-11T04:02:56Z)
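One simple instance of such a transformation, offered only as an illustration of the accuracy/reliability trade-off the summary mentions: fit a single noise scale to held-out residuals and attach a Gaussian of that width to every point prediction.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(point_preds, val_residuals):
    # Turn a deterministic model probabilistic by estimating one global
    # noise scale from validation residuals; a single sigma can be well
    # calibrated on average yet too wide or too narrow locally.
    sigma = np.std(val_residuals, ddof=1)
    return [norm(loc=m, scale=sigma) for m in point_preds]

# Example: 90% predictive intervals from the fitted Gaussians.
preds = np.array([1.2, 0.7, -0.3])
residuals = np.random.default_rng(0).normal(scale=0.5, size=100)
for d in gaussianize(preds, residuals):
    print(d.interval(0.9))
```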