Nonparametric Distribution Regression Re-calibration
- URL: http://arxiv.org/abs/2602.13362v1
- Date: Fri, 13 Feb 2026 11:48:43 GMT
- Title: Nonparametric Distribution Regression Re-calibration
- Authors: Ádám Jung, Domokos M. Kelen, András A. Benczúr
- Abstract summary: Minimizing overall prediction error encourages models to prioritize informativeness over calibration. In safety-critical settings, trustworthy uncertainty estimates are often more valuable than narrow intervals. We propose a novel non-parametric re-calibration algorithm based on conditional kernel mean embeddings.
- Score: 3.0204520109309847
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A key challenge in probabilistic regression is ensuring that predictive distributions accurately reflect true empirical uncertainty. Minimizing overall prediction error often encourages models to prioritize informativeness over calibration, producing narrow but overconfident predictions. However, in safety-critical settings, trustworthy uncertainty estimates are often more valuable than narrow intervals. Recognizing this problem, several recent works have focused on post-hoc corrections; however, existing methods either rely on weak notions of calibration (such as PIT uniformity) or impose restrictive parametric assumptions on the nature of the error. To address these limitations, we propose a novel nonparametric re-calibration algorithm based on conditional kernel mean embeddings, capable of correcting calibration error without restrictive modeling assumptions. For efficient inference with real-valued targets, we introduce a novel characteristic kernel over distributions that can be evaluated in $\mathcal{O}(n \log n)$ time for empirical distributions of size $n$. We demonstrate that our method consistently outperforms prior re-calibration approaches across a diverse set of regression benchmarks and model classes.
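The abstract does not spell out the kernel's construction, but a standard way to get an $\mathcal{O}(n \log n)$-evaluable kernel on one-dimensional empirical distributions is to build it on the 1-D Wasserstein-1 distance, which for equal-size samples reduces to a mean absolute difference of sorted values. The sketch below (function names are illustrative, not the paper's) shows that pattern:

```python
import numpy as np

def w1_empirical(x, y):
    """1-D Wasserstein-1 distance between two equal-size empirical
    distributions: the mean absolute difference of the sorted samples.
    Sorting dominates the cost, so evaluation is O(n log n)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def distribution_kernel(x, y, gamma=1.0):
    """A Laplacian-type kernel on empirical distributions,
    k(P, Q) = exp(-gamma * W1(P, Q)). Illustrative only: the paper's
    actual characteristic kernel is not reproduced here."""
    return np.exp(-gamma * w1_empirical(x, y))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=1000)
q = rng.normal(0.5, 1.0, size=1000)
print(distribution_kernel(p, p))  # 1.0: zero distance to itself
print(distribution_kernel(p, q))  # < 1.0: the distributions differ
```

The sorting trick is what makes the per-evaluation cost $\mathcal{O}(n \log n)$ rather than the $\mathcal{O}(n^2)$ of generic pairwise-kernel distances.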
Related papers
- Total Robustness in Bayesian Nonlinear Regression for Measurement Error Problems under Model Misspecification [7.233732121762458]
We present the first Bayesian nonparametric framework targeting total robustness that tackles all three challenges in general nonlinear regression. A gradient-based algorithm enables efficient computations; simulations and two real-world studies show lower estimation error and reduced estimation sensitivity to misspecification. The framework, therefore, offers a practical and interpretable paradigm for trustworthy regression when data and models are jointly imperfect.
arXiv Detail & Related papers (2025-10-03T15:58:40Z)
- A Novel Framework for Uncertainty Quantification via Proper Scores for Classification and Beyond [1.5229257192293202]
We propose a novel framework for uncertainty quantification in machine learning, which is based on proper scores. Specifically, we use the kernel score, a kernel-based proper score, for evaluating sample-based generative models. We generalize the calibration-sharpness decomposition beyond classification, which motivates the definition of proper calibration errors.
arXiv Detail & Related papers (2025-08-25T13:11:03Z)
- Rethinking Early Stopping: Refine, Then Calibrate [49.966899634962374]
We present a novel variational formulation of the calibration-refinement decomposition. We provide theoretical and empirical evidence that calibration and refinement errors are not minimized simultaneously during training.
arXiv Detail & Related papers (2025-01-31T15:03:54Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs. We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
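For context, standard quantile regression fits each quantile level by minimizing the pinball loss, and a prediction interval is conventionally assembled from a fixed pair of quantile levels (the constraint RQR relaxes). A minimal sketch of that baseline objective, with illustrative names:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball (quantile) loss: the objective minimized by quantile
    regression for a target quantile level tau in (0, 1)."""
    u = y - q_pred
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

# A (1 - alpha) interval is conventionally built from the alpha/2 and
# 1 - alpha/2 quantiles; RQR removes this fixed pairing.
y = np.array([1.0, 2.0, 3.0, 4.0])
print(pinball_loss(y, np.full(4, 2.5), tau=0.5))  # 0.5 * mean |y - 2.5| = 0.5
```

At tau = 0.5 the pinball loss reduces to half the mean absolute error, i.e. median regression.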
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one allows us to account not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods [9.662269016653296]
We consider an individual calibration objective for characterizing the quantiles of the prediction model.
Existing methods largely lack statistical guarantees in terms of individual calibration.
We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model.
arXiv Detail & Related papers (2023-05-20T21:31:51Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, a direct regression formulation is considered, and the uncertainty is modeled by restricting the output space to a certain family of probability distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Calibrated Reliable Regression using Maximum Mean Discrepancy [45.45024203912822]
Modern deep neural networks still produce unreliable predictive uncertainty.
In this paper, we are concerned with getting well-calibrated predictions in regression tasks.
Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals.
arXiv Detail & Related papers (2020-06-18T03:38:12Z)
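Several of the papers above (and the main paper's kernel-mean-embedding approach) build on the maximum mean discrepancy, a kernel two-sample distance between distributions. A minimal sketch of the standard biased (V-statistic) estimate with a Gaussian kernel, using illustrative function names:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between 1-D samples a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between samples x and y:
    E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y_same = rng.normal(0.0, 1.0, 500)
y_diff = rng.normal(2.0, 1.0, 500)
print(mmd2_biased(x, y_same))  # near 0: same distribution
print(mmd2_biased(x, y_diff))  # clearly positive: distributions differ
```

Minimizing such a discrepancy between predicted and observed distributions is one way to enforce the distribution-matching notion of calibration these works target.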
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.