Robust Decision Making with Partially Calibrated Forecasts
- URL: http://arxiv.org/abs/2510.23471v1
- Date: Mon, 27 Oct 2025 16:09:07 GMT
- Title: Robust Decision Making with Partially Calibrated Forecasts
- Authors: Shayan Kiyani, Hamed Hassani, George Pappas, Aaron Roth
- Abstract summary: We study how a conservative decision maker should map predictions endowed with weaker ("partial") calibration guarantees to actions. We characterize their minimax optimal decision rule via a duality argument, and show that, surprisingly, "trusting the predictions and acting accordingly" is recovered in this minimax sense.
- Score: 32.96842403002962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Calibration has emerged as a foundational goal in "trustworthy machine learning", in part because of its strong decision theoretic semantics. Independent of the underlying distribution, and independent of the decision maker's utility function, calibration promises that amongst all policies mapping predictions to actions, the uniformly best policy is the one that "trusts the predictions" and acts as if they were correct. But this is true only of *fully calibrated* forecasts, which are tractable to guarantee only for very low dimensional prediction problems. For higher dimensional prediction problems (e.g. when outcomes are multiclass), weaker forms of calibration have been studied that lack these decision theoretic properties. In this paper we study how a conservative decision maker should map predictions endowed with these weaker ("partial") calibration guarantees to actions, in a way that is robust in a minimax sense: i.e. to maximize their expected utility in the worst case over distributions consistent with the calibration guarantees. We characterize their minimax optimal decision rule via a duality argument, and show that surprisingly, "trusting the predictions and acting accordingly" is recovered in this minimax sense by *decision calibration* (and any strictly stronger notion of calibration), a substantially weaker and more tractable condition than full calibration. For calibration guarantees that fall short of decision calibration, the minimax optimal decision rule is still efficiently computable, and we provide an empirical evaluation of a natural one that applies to any regression model solved to optimize squared error.
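To make the robust rule concrete: for a finite outcome space, with a set of linear constraints standing in for a partial calibration guarantee, the maximin action can be computed with a small linear program. The sketch below is ours, not the paper's algorithm; the linear constraint form `A q = b` and all function names are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): maximin action
# selection under linear "partial calibration" constraints A q = b on the
# conditional label distribution q. Constraints are assumed feasible.
import numpy as np
from scipy.optimize import linprog

def worst_case_utility(u_a, A, b):
    """Minimize u_a . q over distributions q with A q = b, q >= 0, sum(q) = 1."""
    k = len(u_a)
    A_eq = np.vstack([A, np.ones((1, k))])   # calibration constraints + simplex
    b_eq = np.append(b, 1.0)
    res = linprog(c=u_a, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * k)
    return res.fun                            # worst-case expected utility

def maximin_action(U, A, b):
    """U[a, y] = utility of action a under outcome y; pick the action whose
    worst-case expected utility over the constraint set is largest."""
    values = [worst_case_utility(U[a], A, b) for a in range(U.shape[0])]
    best = int(np.argmax(values))
    return best, values[best]
```

When the constraints pin the conditional label distribution down completely, the inner minimum collapses to the forecast itself and the rule reduces to "trust the predictions", consistent with the abstract's characterization for decision calibration and stronger notions.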
Related papers
- Nonparametric Distribution Regression Re-calibration [3.0204520109309847]
Minimizing overall prediction error encourages models to prioritize informativeness over calibration. In safety-critical settings, trustworthy uncertainty estimates are often more valuable than narrow intervals. We propose a novel non-parametric re-calibration algorithm based on conditional kernel mean embeddings.
arXiv Detail & Related papers (2026-02-13T11:48:43Z)
- Uncertainty-Aware Post-Hoc Calibration: Mitigating Confidently Incorrect Predictions Beyond Calibration Metrics [6.9681910774977815]
This paper presents a post-hoc calibration framework to enhance calibration quality and uncertainty-aware decision-making. A comprehensive evaluation is conducted using calibration metrics, uncertainty-aware performance measures, and empirical conformal coverage. Experiments show that the proposed method achieves fewer confidently incorrect predictions and competitive Expected Error compared with isotonic and focal-loss baselines.
arXiv Detail & Related papers (2025-10-19T23:55:36Z)
- Persuasive Calibration [15.651406777700517]
We adopt the standard calibration framework that regulates predictions to be unbiased conditional on their own value. We show that the optimal predictor is over- (resp. under-) confident for high (resp. low) true expected outcomes, while remaining perfectly calibrated in the middle.
arXiv Detail & Related papers (2025-04-04T06:49:56Z)
- Rethinking Early Stopping: Refine, Then Calibrate [49.966899634962374]
We present a novel variational formulation of the calibration-refinement decomposition. We provide theoretical and empirical evidence that calibration and refinement errors are not minimized simultaneously during training (the classical form of the decomposition is recalled after this list).
arXiv Detail & Related papers (2025-01-31T15:03:54Z)
- Truthfulness of Calibration Measures [18.21682539787221]
A calibration measure is said to be truthful if the forecaster minimizes expected penalty by predicting the conditional expectation of the next outcome.
This makes it an essential desideratum for calibration measures, alongside typical requirements such as soundness and completeness.
We introduce a new calibration measure termed the Subsampled Smoothed Calibration Error (SSCE), under which truthful prediction is optimal up to a constant multiplicative factor (a math sketch of the truthfulness condition appears after this list).
arXiv Detail & Related papers (2024-07-19T02:07:55Z)
- Orthogonal Causal Calibration [55.28164682911196]
We develop general algorithms for reducing the task of causal calibration to that of calibrating a standard (non-causal) predictive model. Our results are exceedingly general, showing that essentially any existing calibration algorithm can be used in causal settings.
arXiv Detail & Related papers (2024-06-04T03:35:25Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions (a minimal kernel-penalty sketch appears after this list).
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- When Does Optimizing a Proper Loss Yield Calibration? [12.684962113589515]
We show that any predictor satisfying a local optimality condition is smoothly calibrated.
We also show that the connection between local optimality and calibration error goes both ways.
arXiv Detail & Related papers (2023-05-30T05:53:34Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration [118.26862029820447]
We introduce a new notion, *decision calibration*, that requires the predicted distribution and true distribution to be "indistinguishable" to a set of downstream decision-makers (a minimal empirical check is sketched after this list).
Decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural networks.
arXiv Detail & Related papers (2021-07-12T20:17:28Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it (an importance-weighting sketch appears after this list).
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
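For "Rethinking Early Stopping: Refine, Then Calibrate": as background, here is the classical calibration-refinement decomposition of a proper loss, stated for orientation only (the paper develops a variational formulation, which is not reproduced here). Writing c(p) = E[Y | f(X) = p] for the perfectly recalibrated predictor:

```latex
% Classical calibration-refinement decomposition of a proper loss \ell
% (standard background form; the paper's variational analogue differs).
\[
  \mathbb{E}\,\ell(f(X), Y)
  \;=\;
  \underbrace{\mathbb{E}\big[\ell(f(X), Y) - \ell(c(f(X)), Y)\big]}_{\text{calibration error}}
  \;+\;
  \underbrace{\mathbb{E}\,\ell(c(f(X)), Y)}_{\text{refinement error}}
\]
```

Post-hoc recalibration can shrink the first term without touching the second, which is one reading of why refining first and calibrating afterwards can be preferable.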
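For "Truthfulness of Calibration Measures": the desideratum quoted above can be written as follows, in assumed notation (M is the penalty assigned by the measure over a horizon T, and F_{t-1} is the history; this is our paraphrase, not the paper's exact formalism):

```latex
% A calibration measure M is truthful if honest conditional means
% minimize the forecaster's expected penalty (assumed notation).
\[
  \text{for every } t:\quad
  \mathbb{E}\big[\, y_t \mid \mathcal{F}_{t-1} \,\big]
  \;\in\;
  \arg\min_{p_t}\; \mathbb{E}\big[\, M(p_{1:T},\, y_{1:T}) \,\big]
\]
```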
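For "Calibration by Distribution Matching: Trainable Kernel Calibration Metrics": a minimal sketch of what a differentiable, kernel-based calibration penalty can look like. This is an assumed MMCE-style form for illustration, not the paper's exact metric; all names are ours.

```python
# Minimal MMCE-style sketch (assumed form, not the paper's exact metric):
# a kernel-weighted quadratic in the calibration residuals, differentiable
# in the predicted confidences, so it can be added to a training objective.
import numpy as np

def kernel_calibration_penalty(conf, correct, bandwidth=0.1):
    """conf: predicted confidences in [0, 1]; correct: 0/1 correctness."""
    r = correct - conf                                # calibration residuals
    d = conf[:, None] - conf[None, :]
    K = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))    # RBF kernel on confidences
    return float(r @ K @ r) / len(conf) ** 2
```

Because the penalty is a smooth function of the confidences, it can be added directly to an empirical risk objective, which is the property the entry highlights.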
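For "Calibrating Predictions to Decisions": a minimal empirical check of the indistinguishability idea, under assumptions (one decision maker with a known finite loss matrix; function and variable names are ours). Per action induced by best-responding to the forecast, compare the loss anticipated under the forecast with the loss actually incurred:

```python
# Sketch (assumed formulation): per induced action, compare the loss a
# decision maker anticipates under the forecast with the realized loss.
# Large gaps indicate a failure of decision calibration for that loss.
import numpy as np

def decision_calibration_gaps(probs, labels, L):
    """probs: (n, k) forecasts; labels: (n,) outcomes; L[a, y]: loss matrix."""
    actions = np.argmin(probs @ L.T, axis=1)      # best response to forecast
    gaps = {}
    for a in np.unique(actions):
        m = actions == a
        anticipated = (probs[m] @ L[a]).mean()    # loss expected under forecast
        realized = L[a, labels[m]].mean()         # loss actually incurred
        gaps[int(a)] = realized - anticipated
    return gaps
```

Gaps near zero for every decision maker in the relevant class is what decision calibration asks for; the paper's formal definition quantifies over a set of downstream decision-makers.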
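For "Unsupervised Calibration under Covariate Shift": a minimal sketch of the importance-sampling idea applied to temperature scaling. The weights w, approximating p_target(x) / p_source(x) on labeled source data, are taken as given; their estimation, and all names here, are our assumptions rather than the paper's exact method.

```python
# Sketch (assumed setup): importance-weighted temperature scaling, where the
# calibration objective is re-weighted toward the target distribution.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_weighted_temperature(logits, labels, w):
    """Fit a single temperature T by minimizing importance-weighted NLL."""
    def weighted_nll(log_T):
        z = logits / np.exp(log_T)
        z = z - z.max(axis=1, keepdims=True)          # stable log-softmax
        log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -(w * log_p[np.arange(len(labels)), labels]).sum() / w.sum()
    res = minimize_scalar(weighted_nll, bounds=(-3.0, 3.0), method="bounded")
    return float(np.exp(res.x))
```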