Integrating Uncertainty Awareness into Conformalized Quantile Regression
- URL: http://arxiv.org/abs/2306.08693v2
- Date: Tue, 12 Mar 2024 14:58:30 GMT
- Title: Integrating Uncertainty Awareness into Conformalized Quantile Regression
- Authors: Raphael Rossellini, Rina Foygel Barber, Rebecca Willett
- Abstract summary: We propose a new variant of the Conformalized Quantile Regression (CQR) methodology to adjust quantile regressors differentially across the feature space.
Compared to CQR, our methods enjoy the same distribution-free theoretical coverage guarantees, while demonstrating stronger conditional coverage properties in simulated settings and real-world data sets alike.
- Score: 12.875863572064986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformalized Quantile Regression (CQR) is a recently proposed method for
constructing prediction intervals for a response $Y$ given covariates $X$,
without making distributional assumptions. However, existing constructions of
CQR can be ineffective for problems where the quantile regressors perform
better in certain parts of the feature space than others. The reason is that
the prediction intervals of CQR do not distinguish between two forms of
uncertainty: first, the variability of the conditional distribution of $Y$
given $X$ (i.e., aleatoric uncertainty), and second, our uncertainty in
estimating this conditional distribution (i.e., epistemic uncertainty). This
can lead to intervals that are overly narrow in regions where epistemic
uncertainty is high. To address this, we propose a new variant of the CQR
methodology, Uncertainty-Aware CQR (UACQR), that explicitly separates these two
sources of uncertainty to adjust quantile regressors differentially across the
feature space. Compared to CQR, our methods enjoy the same distribution-free
theoretical coverage guarantees, while demonstrating in our experiments
stronger conditional coverage properties in simulated settings and real-world
data sets alike.
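The constant-width adjustment that the abstract critiques can be made concrete with a minimal sketch of the plain split-CQR calibration step: fit lower and upper quantile regressors, score a held-out calibration set, and widen the band everywhere by a single constant. The helper name and inputs below are our own illustration, not the paper's code; the point is that the correction `q_hat` is the same at every feature value, which is exactly the behavior UACQR makes uncertainty-dependent.

```python
import numpy as np

def cqr_interval(q_lo_cal, q_hi_cal, y_cal, q_lo_new, q_hi_new, alpha=0.1):
    """Split-conformal calibration of an estimated quantile band (plain CQR).

    q_lo_cal, q_hi_cal: lower/upper quantile predictions on the calibration set.
    y_cal: calibration responses.
    q_lo_new, q_hi_new: quantile predictions at the new point(s).
    """
    # Conformity score: signed distance of y outside the estimated band
    # (negative when y falls strictly inside it).
    scores = np.maximum(q_lo_cal - y_cal, y_cal - q_hi_cal)
    n = len(scores)
    # Finite-sample-corrected empirical quantile; this choice is what yields
    # the distribution-free 1 - alpha marginal coverage guarantee.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")
    # The same constant q_hat adjusts the band everywhere in feature space,
    # regardless of how uncertain the quantile estimates are locally.
    return q_lo_new - q_hat, q_hi_new + q_hat
```

For example, calibrating a fixed band [-1, 1] against standard-normal calibration data widens it until roughly 90% of fresh draws fall inside.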
Related papers
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via empirical estimation of quantiles of the output distribution.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes the constraint of tying interval endpoints to fixed, pre-specified quantile levels.
We demonstrate that this added flexibility yields intervals that improve on several desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning [1.7898305876314982]
The proposed algorithm combines deep evidential learning with quantile calibration based on principles of conformal inference.
It is tested on a suite of miniaturized Atari games (i.e., MinAtar).
arXiv Detail & Related papers (2024-02-11T05:17:56Z) - Unified Uncertainty Calibration [43.733911707842005]
We introduce unified uncertainty calibration (U2C), a holistic framework to combine aleatoric and epistemic uncertainties.
U2C enables a clean learning-theoretical analysis of uncertainty estimation, and outperforms reject-or-classify across a variety of ImageNet benchmarks.
arXiv Detail & Related papers (2023-10-02T13:42:36Z) - Will My Robot Achieve My Goals? Predicting the Probability that an MDP Policy Reaches a User-Specified Behavior Target [56.99669411766284]
As an autonomous system performs a task, it should maintain a calibrated estimate of the probability that it will achieve the user's goal.
This paper considers settings where the user's goal is specified as a target interval for a real-valued performance summary.
We compute the probability estimates by inverting conformal prediction.
arXiv Detail & Related papers (2022-11-29T18:41:20Z) - Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z) - Improving Conditional Coverage via Orthogonal Quantile Regression [12.826754199680472]
We develop a method to generate prediction intervals that have a user-specified coverage level across all regions of feature-space.
We modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event.
We empirically show that the modified loss function leads to improved conditional coverage, as evaluated by several metrics.
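The modified loss described above can be illustrated with a toy version: the usual pinball (quantile) loss for each interval endpoint, plus a penalty on the sample correlation between interval width and the coverage indicator. This is our own schematic rendering of the idea, not the paper's exact objective; the function names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    # Standard quantile (pinball) loss at level tau.
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def width_coverage_correlation(q_lo, q_hi, y):
    # |Pearson correlation| between interval width and the coverage
    # indicator; driving this toward zero encourages the independence
    # between the two that the modified loss promotes.
    widths = q_hi - q_lo
    covered = ((y >= q_lo) & (y <= q_hi)).astype(float)
    w = widths - widths.mean()
    c = covered - covered.mean()
    denom = np.sqrt((w * w).sum() * (c * c).sum()) + 1e-12
    return abs((w * c).sum()) / denom

def orthogonal_qr_objective(y, q_lo, q_hi, alpha=0.1, lam=1.0):
    # Combined objective: pinball losses for both quantile estimates
    # plus the decorrelation penalty, weighted by lam (a hypothetical
    # tuning parameter in this sketch).
    return (pinball_loss(y, q_lo, alpha / 2)
            + pinball_loss(y, q_hi, 1 - alpha / 2)
            + lam * width_coverage_correlation(q_lo, q_hi, y))
```

Minimizing this objective trades a small amount of quantile fit for intervals whose widths carry no linear signal about where miscoverage happens.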
arXiv Detail & Related papers (2021-06-01T11:02:29Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z) - Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z) - Distribution-free binary classification: prediction sets, confidence intervals and calibration [106.50279469344937]
We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting.
We derive confidence intervals for binned probabilities for both fixed-width and uniform-mass binning.
As a consequence of our 'tripod' theorems, these confidence intervals for binned probabilities lead to distribution-free calibration.
arXiv Detail & Related papers (2020-06-18T14:17:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.