Parity Calibration
- URL: http://arxiv.org/abs/2305.18655v2
- Date: Wed, 7 Jun 2023 23:14:34 GMT
- Title: Parity Calibration
- Authors: Youngseog Chung, Aaron Rumack, Chirag Gupta
- Abstract summary: We introduce the notion of parity calibration, which captures the goal of calibrated forecasting for the increase-decrease (or "parity") event in a timeseries.
Parity probabilities can be extracted from a forecasted distribution for the output, but we show that such a strategy leads to theoretical unpredictability and poor practical performance.
We demonstrate the effectiveness of our approach on real-world case studies in epidemiology, weather forecasting, and model-based control in nuclear fusion.
- Score: 5.768816587293478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a sequential regression setting, a decision-maker may be primarily
concerned with whether the future observation will increase or decrease
compared to the current one, rather than the actual value of the future
observation. In this context, we introduce the notion of parity calibration,
which captures the goal of calibrated forecasting for the increase-decrease (or
"parity") event in a timeseries. Parity probabilities can be extracted from a
forecasted distribution for the output, but we show that such a strategy leads
to theoretical unpredictability and poor practical performance. We then observe
that although the original task was regression, parity calibration can be
expressed as binary calibration. Drawing on this connection, we use an online
binary calibration method to achieve parity calibration. We demonstrate the
effectiveness of our approach on real-world case studies in epidemiology,
weather forecasting, and model-based control in nuclear fusion.
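To make the binary-calibration reduction concrete, the following is a minimal sketch (not the authors' implementation) of the two steps the abstract describes: extracting the parity probability P(y_{t+1} > y_t) from a forecasted distribution, and then recalibrating that probability with a simple online binary recalibrator. The synthetic mean-reverting series, the overconfident Gaussian forecaster, and the windowed histogram-binning rule are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic mean-reverting series: y[t+1] = 0.8 * y[t] + N(0, 1).
T = 2000
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.8 * y[t] + rng.normal()

# A Gaussian forecaster with the right mean but an overconfident scale.
def forecast(t):
    return 0.8 * y[t], 0.3   # (mu, sigma); the true residual scale is 1.0

history = []                          # recent (raw parity prob, outcome) pairs
p_raw_all, p_cal_all, outcome_all = [], [], []

for t in range(T - 1):
    mu, sigma = forecast(t)
    # Parity probability extracted from the forecast distribution:
    # P(y[t+1] > y[t]) under N(mu, sigma^2).
    p_raw = 1.0 - norm.cdf(y[t], loc=mu, scale=sigma)

    # Online binary recalibration: replace p_raw by the empirical frequency of
    # the parity event among recent steps whose raw probability was similar
    # (a windowed histogram-binning stand-in for an online binary calibrator).
    similar = [o for q, o in history if abs(q - p_raw) < 0.1]
    p_cal = float(np.mean(similar)) if len(similar) >= 20 else p_raw

    outcome = float(y[t + 1] > y[t])
    history.append((p_raw, outcome))
    history = history[-500:]          # keep a sliding window of recent pairs

    p_raw_all.append(p_raw); p_cal_all.append(p_cal); outcome_all.append(outcome)

def ece(probs, outcomes, bins=10):
    """Expected calibration error over equal-width probability bins."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    edges = np.linspace(0.0, 1.0, bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (probs >= lo) & (probs < hi)
        if m.any():
            total += m.mean() * abs(probs[m].mean() - outcomes[m].mean())
    return total

print(f"ECE raw: {ece(p_raw_all, outcome_all):.3f}  "
      f"ECE recalibrated: {ece(p_cal_all, outcome_all):.3f}")
```

Any online binary calibration method could replace the windowed binning rule; the point is only that the parity event is a binary outcome, so online binary calibration applies to it directly even though the original task was regression.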
Related papers
- Calibration Strategies for Robust Causal Estimation: Theoretical and Empirical Insights on Propensity Score Based Estimators [0.6562256987706128]
The partitioning of data for estimation and calibration critically impacts the performance of propensity score based estimators.
We extend recent advances in calibration techniques for propensity score estimation, improving the robustness of propensity scores in challenging settings.
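As a rough sketch of the estimate-then-calibrate split this summary refers to (not the paper's specific estimators), the example below fits a propensity model on one fold, calibrates it with isotonic regression on a held-out fold, and plugs both versions into an inverse-propensity-weighted effect estimate. The data-generating process, the logistic propensity model, and the isotonic calibrator are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

# Synthetic observational data (assumed, for illustration only).
n, d = 4000, 5
X = rng.normal(size=(n, d))
true_ps = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, true_ps)                       # treatment indicator
Y = 2.0 * A + X[:, 0] + rng.normal(size=n)         # outcome, true ATE = 2

# Partition: one fold estimates the propensity model, the other calibrates it.
idx = rng.permutation(n)
est, cal = idx[: n // 2], idx[n // 2 :]

model = LogisticRegression(max_iter=1000).fit(X[est], A[est])
raw_ps = model.predict_proba(X)[:, 1]

# Isotonic calibration of the propensity scores on the held-out fold.
iso = IsotonicRegression(out_of_bounds="clip", y_min=1e-3, y_max=1 - 1e-3)
iso.fit(raw_ps[cal], A[cal])
cal_ps = iso.predict(raw_ps)

def ipw_ate(ps):
    """Inverse-propensity-weighted estimate of the average treatment effect."""
    ps = np.clip(ps, 1e-3, 1 - 1e-3)
    return np.mean(A * Y / ps - (1 - A) * Y / (1 - ps))

print("IPW ATE with raw propensities:       ", round(ipw_ate(raw_ps), 3))
print("IPW ATE with calibrated propensities:", round(ipw_ate(cal_ps), 3))
```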
arXiv Detail & Related papers (2025-03-21T16:41:10Z) - Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z) - Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high-dimensional datasets.
arXiv Detail & Related papers (2024-08-08T17:27:29Z) - Self-Calibrating Conformal Prediction [16.606421967131524]
We introduce Self-Calibrating Conformal Prediction to deliver calibrated point predictions alongside prediction intervals with finite-sample validity conditional on these predictions.
We show that our method improves calibrated interval efficiency through model calibration and offers a practical alternative to feature-conditional validity.
arXiv Detail & Related papers (2024-02-11T21:12:21Z) - Calibration by Distribution Matching: Trainable Kernel Calibration
Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
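The sketch below illustrates the general recipe of adding a differentiable calibration objective to empirical risk minimization, using an MMCE-style kernel estimate of binary calibration error as a stand-in for the paper's metrics; the model, synthetic data, kernel bandwidth, and penalty weight are assumptions.

```python
import torch

torch.manual_seed(0)

# Synthetic binary classification data (assumed, for illustration).
n, d = 2000, 10
X = torch.randn(n, d)
w_true = torch.randn(d)
y = (torch.sigmoid(X @ w_true) > torch.rand(n)).float()

model = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))

def kernel_calibration_penalty(prob, label, bandwidth=0.2):
    """Differentiable kernel-based calibration error (an MMCE-style estimate):
    pairs of points with similar predicted probability should, on average,
    have a matching empirical frequency of the positive label."""
    gap = prob - label                                      # calibration residuals
    k = torch.exp(-torch.abs(prob[:, None] - prob[None, :]) / bandwidth)
    return (gap[:, None] * gap[None, :] * k).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    logits = model(X).squeeze(-1)
    prob = torch.sigmoid(logits)
    # Empirical risk plus a differentiable calibration objective.
    loss = bce(logits, y) + 1.0 * kernel_calibration_penalty(prob, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```

Because the penalty is a smooth function of the predicted probabilities, it can be minimized jointly with the cross-entropy loss by any gradient-based optimizer.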
arXiv Detail & Related papers (2023-10-31T06:19:40Z) - A Large-Scale Study of Probabilistic Calibration in Neural Network Regression [3.13468877208035]
We conduct the largest empirical study to date to assess the probabilistic calibration of neural networks.
We introduce novel differentiable recalibration and regularization methods, uncovering new insights into their effectiveness.
arXiv Detail & Related papers (2023-06-05T09:33:39Z) - Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods [9.662269016653296]
We consider an individual calibration objective for characterizing the quantiles of the prediction model.
Existing methods largely lack statistical guarantees in terms of individual calibration.
We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model.
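As one concrete instance of a nonparametric, model-agnostic calibration procedure (an illustration of the general approach, not necessarily the paper's method), the sketch below attaches predictive quantiles to a black-box point predictor by taking empirical quantiles of residuals over nearest neighbours in a held-out calibration split; the data and model choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Heteroscedastic synthetic data (assumed, for illustration).
n = 3000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + (0.2 + 0.3 * np.abs(X[:, 0])) * rng.normal(size=n)

# Any black-box point predictor; its internals are never used below.
train, calib, test = np.split(rng.permutation(n), [n // 2, 3 * n // 4])
model = GradientBoostingRegressor().fit(X[train], y[train])

# Nonparametric calibration: for a test point, take the residuals of its
# k nearest calibration neighbours and read off an empirical quantile,
# approximating a conditional (individual-level) prediction quantile.
resid_cal = y[calib] - model.predict(X[calib])
nn = NearestNeighbors(n_neighbors=100).fit(X[calib])

def predictive_quantile(x_new, q=0.9, k=100):
    _, idx = nn.kneighbors(x_new, n_neighbors=k)
    local_resid = resid_cal[idx]                           # shape (n_new, k)
    return model.predict(x_new) + np.quantile(local_resid, q, axis=1)

q90 = predictive_quantile(X[test], q=0.9)
coverage = np.mean(y[test] <= q90)
print(f"empirical coverage of the 0.9 quantile: {coverage:.3f}")
```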
arXiv Detail & Related papers (2023-05-20T21:31:51Z) - Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z) - Modular Conformal Calibration [80.33410096908872]
We introduce a versatile class of algorithms for recalibration in regression.
This framework allows one to transform any regression model into a calibrated probabilistic model.
We conduct an empirical study of MCC on 17 regression datasets.
arXiv Detail & Related papers (2022-06-23T03:25:23Z) - Recalibrating probabilistic forecasts of epidemics [13.447680826767183]
We present a recalibration method that can be applied to a black-box forecaster given retrospective forecasts and observations.
This method is guaranteed to improve calibration and log score performance when trained and measured in-sample.
We apply this recalibration method to the 27 influenza forecasters in the FluSight Network and show that recalibration reliably improves forecast accuracy and calibration.
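A simplified illustration of black-box recalibration from retrospective forecasts and observations (not necessarily the paper's exact procedure): compute the probability integral transform (PIT) of each past observation under its forecast, then use the empirical CDF of those PIT values as a monotone recalibration map for new forecast CDFs. The Gaussian forecasts and their overconfidence below are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Retrospective setting (assumed): a black-box forecaster issued Gaussian
# forecasts N(mu_t, sigma_t^2) that were systematically overconfident.
T = 300
mu = rng.normal(size=T)
sigma_reported = 0.5 * np.ones(T)                 # scale the forecaster claimed
obs = mu + 1.0 * rng.normal(size=T)               # true noise scale is 1.0

# PIT of each past observation under the forecast it received;
# for a calibrated forecaster these values are uniform on [0, 1].
pit = norm.cdf(obs, loc=mu, scale=sigma_reported)

# Recalibration map: the empirical CDF of the retrospective PIT values.
pit_sorted = np.sort(pit)
def recal(u):
    return np.searchsorted(pit_sorted, u, side="right") / len(pit_sorted)

# Apply the map to a new black-box forecast: the recalibrated CDF is
# F_cal(y) = recal(F_raw(y)).
mu_new, sigma_new = 0.3, 0.5
grid = np.linspace(-4, 4, 9)
raw_cdf = norm.cdf(grid, loc=mu_new, scale=sigma_new)
cal_cdf = recal(raw_cdf)
for y_val, r, c in zip(grid, raw_cdf, cal_cdf):
    print(f"y={y_val:+.1f}  raw CDF={r:.3f}  recalibrated CDF={c:.3f}")
```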
arXiv Detail & Related papers (2021-12-12T19:22:24Z) - Learning Prediction Intervals for Regression: Generalization and Calibration [12.576284277353606]
We study the generation of prediction intervals in regression for uncertainty quantification.
We use a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes.
We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.
arXiv Detail & Related papers (2021-02-26T17:55:30Z) - Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
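A rough sketch of the importance-sampling idea: estimate density-ratio weights between target and source covariates with a domain classifier, then fit a calibration map on labelled source data under those weights so that it is tuned for the shifted target domain. Temperature scaling, the synthetic domains, and the models are assumptions here, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Source and (covariate-shifted) target domains; only the source is labelled.
n, d = 4000, 4
X_src = rng.normal(0.0, 1.0, size=(n, d))
X_tgt = rng.normal(0.7, 1.0, size=(n, d))          # shifted covariates
w_true = np.array([1.5, -1.0, 0.5, 0.0])
y_src = (rng.random(n) < 1 / (1 + np.exp(-X_src @ w_true))).astype(float)

# A classifier trained on the source domain (the model we want to calibrate).
clf = LogisticRegression(C=100.0, max_iter=1000).fit(X_src, y_src)
logits_src = clf.decision_function(X_src)

# Importance weights p_target(x) / p_source(x) from a domain discriminator.
X_dom = np.vstack([X_src, X_tgt])
d_lab = np.concatenate([np.zeros(n), np.ones(n)])
dom = LogisticRegression(max_iter=1000).fit(X_dom, d_lab)
p_tgt = dom.predict_proba(X_src)[:, 1]
w = p_tgt / (1.0 - p_tgt)
w *= len(w) / w.sum()                               # normalize the weights

def weighted_nll(temp):
    """Importance-weighted negative log-likelihood of temperature-scaled probs."""
    p = 1 / (1 + np.exp(-logits_src / temp))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.mean(w * (y_src * np.log(p) + (1 - y_src) * np.log(1 - p)))

temp = minimize_scalar(weighted_nll, bounds=(0.1, 10.0), method="bounded").x
print(f"temperature tuned for the target domain: {temp:.2f}")
```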
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.