Stronger Calibration Lower Bounds via Sidestepping
- URL: http://arxiv.org/abs/2012.03454v3
- Date: Fri, 6 Oct 2023 04:05:12 GMT
- Title: Stronger Calibration Lower Bounds via Sidestepping
- Authors: Mingda Qiao, Gregory Valiant
- Abstract summary: We consider an online binary prediction setting where a forecaster observes a sequence of $T$ bits one by one.
The forecaster is called well-calibrated if for each $p \in [0, 1]$, among the $n_p$ bits for which the forecaster predicts probability $p$, the number of ones, $m_p$, is indeed equal to $p \cdot n_p$.
The calibration error, defined as $\sum_p |m_p - p n_p|$, quantifies the extent to which the forecaster deviates from being well-calibrated.
- Score: 18.383889039268222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider an online binary prediction setting where a forecaster observes a
sequence of $T$ bits one by one. Before each bit is revealed, the forecaster
predicts the probability that the bit is $1$. The forecaster is called
well-calibrated if for each $p \in [0, 1]$, among the $n_p$ bits for which the
forecaster predicts probability $p$, the actual number of ones, $m_p$, is
indeed equal to $p \cdot n_p$. The calibration error, defined as $\sum_p |m_p -
p n_p|$, quantifies the extent to which the forecaster deviates from being
well-calibrated. It has long been known that an $O(T^{2/3})$ calibration error
is achievable even when the bits are chosen adversarially, and possibly based
on the previous predictions. However, little is known on the lower bound side,
except an $\Omega(\sqrt{T})$ bound that follows from the trivial example of
independent fair coin flips.
In this paper, we prove an $\Omega(T^{0.528})$ bound on the calibration
error, which is the first super-$\sqrt{T}$ lower bound for this setting to the
best of our knowledge. The technical contributions of our work include two
lower bound techniques, early stopping and sidestepping, which circumvent the
obstacles that have previously hindered strong calibration lower bounds. We
also propose an abstraction of the prediction setting, termed the
Sign-Preservation game, which may be of independent interest. This game has a
much smaller state space than the full prediction setting and allows simpler
analyses. The $\Omega(T^{0.528})$ lower bound follows from a general reduction
theorem that translates lower bounds on the game value of Sign-Preservation
into lower bounds on the calibration error.
Related papers
- Breaking the $T^{2/3}$ Barrier for Sequential Calibration [26.563792462828726]
We study the problem of online calibrated forecasting of binary sequences.
Foster & Vohra (1998) derived an algorithm with $O(T^{2/3})$ calibration error after $T$ time steps, and showed a lower bound of $\Omega(T^{1/2})$.
Qiao & Valiant (2021) improved the lower bound to $\Omega(T^{0.528})$ by introducing a game called sign preservation and showing that lower bounds for this game imply lower bounds for calibration.
arXiv Detail & Related papers (2024-06-19T16:19:39Z)
- Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making [58.06306331390586]
We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$.
arXiv Detail & Related papers (2024-05-24T11:22:19Z)
- On the Distance from Calibration in Sequential Prediction [4.14360329494344]
We study a sequential binary prediction setting where the forecaster is evaluated in terms of the calibration distance.
The calibration distance is a natural and intuitive measure of deviation from perfect calibration.
We prove that there is a forecasting algorithm that achieves an $O(\sqrt{T})$ calibration distance in expectation on an adversarially chosen sequence of $T$ binary outcomes.
arXiv Detail & Related papers (2024-02-12T07:37:19Z)
- Better and Simpler Lower Bounds for Differentially Private Statistical Estimation [7.693388437377614]
We prove that for any $\alpha \le O(1)$, estimating the covariance of a Gaussian up to spectral error $\alpha$ requires $\tilde{\Omega}\left(\frac{d^{3/2}}{\alpha \varepsilon} + \frac{d}{\alpha^2}\right)$ samples.
Next, we prove that estimating the mean of a heavy-tailed distribution with bounded $k$th moments requires $\tilde{\Omega}\left(\frac{d}{\alpha^{k/(k-1)} \varepsilon} + \dots\right)$ samples.
arXiv Detail & Related papers (2023-10-10T04:02:43Z)
- Minimal Expected Regret in Linear Quadratic Control [79.81807680370677]
We devise an online learning algorithm and provide guarantees on its expected regret.
This regret at time $T$ is upper bounded (i) by $\widetilde{O}((d_u+d_x)\sqrt{d_x T})$ when $A$ and $B$ are unknown.
arXiv Detail & Related papers (2021-09-29T14:07:21Z)
- Evaluating Probabilistic Inference in Deep Learning: Beyond Marginal Predictions [37.92747047688873]
We show that the commonly-used $\tau=1$ can be insufficient to drive good decisions in many settings of interest.
We also show that, as $\tau$ grows, performing well according to $\mathbf{d}_{\mathrm{KL}}^{\tau}$ recovers universal guarantees for any possible decision.
arXiv Detail & Related papers (2021-07-20T01:55:01Z)
- Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret [144.6358229217845]
We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state.
We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus.
We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_\star \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, and $A$ is the number of actions.
arXiv Detail & Related papers (2021-04-22T17:20:48Z)
- Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals [49.60752558064027]
We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals.
Our lower bounds provide strong evidence that current upper bounds for these tasks are essentially best possible.
arXiv Detail & Related papers (2020-06-29T17:10:10Z)
- On the robustness of the minimum $\ell_2$ interpolator [2.918940961856197]
We analyse the interpolator with minimal $\ell_2$-norm $\hat{\beta}$ in a general high-dimensional linear regression framework.
We prove that, with high probability, the prediction loss of this estimator is bounded from above by $(\|\beta^*\|^2 r_{c_n}(\Sigma) \vee \|\xi\|^2)/n$, where $r_k(\Sigma) = \sum_{i \geq k} \lambda_i(\Sigma)$ are the rests of the...
arXiv Detail & Related papers (2020-03-12T15:12:28Z)
- Taking a hint: How to leverage loss predictors in contextual bandits? [63.546913998407405]
We study learning in contextual bandits with the help of loss predictors.
We show that the optimal regret is $\mathcal{O}(\min\{\sqrt{T}, \sqrt{\mathcal{E}}\,T^{\frac{1}{3}}\})$ when $\mathcal{E}$ is known.
arXiv Detail & Related papers (2020-03-04T07:36:38Z)
- Tight Regret Bounds for Noisy Optimization of a Brownian Motion [118.65407541895851]
We consider the problem of Bayesian optimization of a one-dimensional Brownian motion in which the $T$ adaptively chosen observations are corrupted by Gaussian noise.
We show that the smallest possible expected cumulative regret and the smallest possible expected simple regret scale as $\Omega(\sigma\sqrt{T \cdot \log T})$.
arXiv Detail & Related papers (2020-01-25T14:44:53Z)