Adaptive Conformal Prediction by Reweighting Nonconformity Score
- URL: http://arxiv.org/abs/2303.12695v2
- Date: Wed, 31 May 2023 17:15:22 GMT
- Title: Adaptive Conformal Prediction by Reweighting Nonconformity Score
- Authors: Salim I. Amoukou and Nicolas J.B. Brunel
- Abstract summary: We use a Quantile Regression Forest (QRF) to learn the distribution of nonconformity scores and utilize the QRF's weights to assign more importance to samples with residuals similar to the test point.
Our approach enjoys assumption-free finite-sample marginal and training-conditional coverage, and under suitable assumptions, it also ensures conditional coverage.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite attractive theoretical guarantees and practical successes, the Predictive Interval (PI) given by Conformal Prediction (CP) may not reflect the uncertainty of a given model. This limitation arises from CP methods using a constant correction for all test points, disregarding their individual uncertainties, to ensure coverage properties. To address this issue, we propose using a Quantile Regression Forest (QRF) to learn the distribution of nonconformity scores and utilizing the QRF's weights to assign more importance to samples with residuals similar to the test point. This approach results in PI lengths that are more aligned with the model's uncertainty. In addition, the weights learnt by the QRF provide a partition of the feature space, allowing for more efficient computations and improved adaptiveness of the PI through groupwise conformalization. Our approach enjoys assumption-free finite-sample marginal and training-conditional coverage, and under suitable assumptions, it also ensures conditional coverage. Our methods work for any nonconformity score and are available as a Python package. We conduct experiments on simulated and real-world data that demonstrate significant improvements compared to existing methods.
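The weighting mechanism can be sketched in a few lines. Below is a minimal illustration of the idea under toy assumptions, not the authors' package: a random forest is fit on nonconformity scores, leaf co-membership with the test point yields weights over a calibration set, and a weighted quantile of the calibration scores sizes the interval. The helper names (qrf_weights, weighted_quantile), the data, and the stand-in predictor are all illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
y = X[:, 0] + (1 + np.abs(X[:, 1])) * rng.normal(size=600)  # heteroscedastic noise
y_hat = X[:, 0]                        # stand-in point predictor
scores = np.abs(y - y_hat)             # absolute-residual nonconformity scores

X_tr, s_tr = X[:300], scores[:300]     # fit the score forest here
X_cal, s_cal = X[300:], scores[300:]   # calibration set for the weighted quantile

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=20,
                               random_state=0).fit(X_tr, s_tr)

def qrf_weights(forest, X_cal, x_test):
    """Weight each calibration point by how often it shares a leaf with x_test."""
    leaves_cal = forest.apply(X_cal)                    # (n_cal, n_trees)
    leaves_test = forest.apply(x_test.reshape(1, -1))   # (1, n_trees)
    same_leaf = (leaves_cal == leaves_test).astype(float)
    # Normalize per tree so each tree contributes a probability distribution.
    per_tree = same_leaf / np.maximum(same_leaf.sum(axis=0, keepdims=True), 1.0)
    return per_tree.mean(axis=1)                        # (n_cal,)

def weighted_quantile(values, weights, q):
    """q-th quantile of `values` under the (normalized) `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    return v[min(np.searchsorted(cdf, q, side="left"), len(v) - 1)]

x_new = np.array([0.5, 2.0, -1.0])
w = qrf_weights(forest, X_cal, x_new)
half_width = weighted_quantile(s_cal, w, 0.9)   # adaptive 90% PI half-width
print(f"PI at x_new: {x_new[0]:.2f} ± {half_width:.2f}")
```

Calibration points whose residual behavior resembles the test point's dominate the quantile, so the half-width grows in high-noise regions and shrinks in low-noise ones.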
Related papers
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes the arbitrary constraint that the interval bounds correspond to fixed quantiles.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Kernel-based optimally weighted conformal prediction intervals [12.814084012624916]
We present Kernel-based Optimally Weighted Conformal Prediction Intervals (KOWCPI)
KOWCPI adapts the classic Reweighted Nadaraya-Watson (RNW) estimator for quantile regression on dependent data.
We demonstrate the superior performance of KOWCPI on real time-series against state-of-the-art methods.
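As with the QRF weights sketched above, the kernel weighting admits a compact illustration; the following is plain Nadaraya-Watson weighting under toy assumptions and omits KOWCPI's optimal reweighting step.

```python
import numpy as np

def nw_weights(X_ref, x_test, bandwidth=1.0):
    """Gaussian-kernel similarity to x_test, normalized over the reference set."""
    d2 = ((X_ref - x_test) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()

# A weighted (1 - alpha) quantile of the reference nonconformity scores,
# computed as in weighted_quantile above, then sizes the prediction interval.
```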
arXiv Detail & Related papers (2024-05-27T04:49:41Z)
- Conformal Prediction with Learned Features [22.733758606168873]
We propose Partition Learning Conformal Prediction (PLCP) to improve conditional validity of prediction sets.
We implement PLCP efficiently via alternating gradient descent, utilizing off-the-shelf machine learning models.
Our experimental results over four real-world and synthetic datasets show the superior performance of PLCP.
arXiv Detail & Related papers (2024-04-26T15:43:06Z)
- Mitigating LLM Hallucinations via Conformal Abstention [70.83870602967625]
We develop a principled procedure for determining when a large language model should abstain from responding in a general domain.
We leverage conformal prediction techniques to develop an abstention procedure that benefits from rigorous theoretical guarantees on the hallucination rate (error rate).
Experimentally, our resulting conformal abstention method reliably bounds the hallucination rate on various closed-book, open-domain generative question answering datasets.
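The calibration behind such a guarantee is standard split conformal and fits in a few lines; the scalar unreliability score fed to it (e.g., one minus a self-consistency measure) is an assumption here, not necessarily the paper's exact choice.

```python
import numpy as np

def abstention_threshold(cal_scores, alpha):
    """Split-conformal threshold: abstaining above it bounds the error rate by alpha."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))    # conformal quantile rank
    return np.sort(cal_scores)[min(k, n) - 1]

def respond(score, tau):
    return "abstain" if score > tau else "answer"

# Toy usage with synthetic calibration scores.
tau = abstention_threshold(np.random.default_rng(0).uniform(size=500), alpha=0.1)
print(respond(0.95, tau))
```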
arXiv Detail & Related papers (2024-04-04T11:32:03Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
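The reliable-sample part of such a criterion reduces to an entropy margin; a hedged sketch follows (the non-redundancy filter and the anti-forgetting regularizer are omitted).

```python
import numpy as np

def reliable_mask(probs, e0):
    """Select test samples whose prediction entropy falls below the margin e0."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return entropy < e0   # backpropagate only through these samples
```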
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Conformalized Unconditional Quantile Regression [27.528258690139793]
We develop a predictive inference procedure that combines conformal prediction with unconditional quantile regression.
We show that our procedure is adaptive to heteroscedasticity, provides transparent coverage guarantees that are relevant to the test instance at hand, and performs competitively with existing methods in terms of efficiency.
arXiv Detail & Related papers (2023-04-04T00:20:26Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
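The wiring of that extra feature is simple; a toy sketch with placeholder data follows, where ss_error stands in for the auxiliary model's self-supervised error.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))             # placeholder features
ss_error = np.abs(rng.normal(size=400))   # placeholder self-supervised error
scores = np.abs(rng.normal(size=400))     # placeholder nonconformity scores

X_aug = np.column_stack([X, ss_error])    # auxiliary error as an extra feature
score_model = RandomForestRegressor(random_state=1).fit(X_aug, scores)
```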
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Post-Contextual-Bandit Inference [57.88785630755165]
Contextual bandit algorithms are increasingly replacing non-adaptive A/B tests in e-commerce, healthcare, and policymaking.
They can both improve outcomes for study participants and increase the chance of identifying good or even best policies.
To support credible inference on novel interventions at the end of the study, we still want to construct valid confidence intervals on average treatment effects, subgroup effects, or value of new policies.
arXiv Detail & Related papers (2021-06-01T12:01:51Z)
- Testing for Outliers with Conformal p-values [14.158078752410182]
The goal is to test whether new independent samples belong to the same distribution as a reference data set or are outliers.
We propose a solution based on conformal inference, a broadly applicable framework which yields p-values that are marginally valid but mutually dependent for different test points.
We prove these p-values are positively dependent and enable exact false discovery rate control, although in a relatively weak marginal sense.
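The conformal p-value itself is one line: given n reference scores s_1, ..., s_n and a test score s, p = (1 + #{i : s_i >= s}) / (n + 1), which is marginally valid under exchangeability with the inliers.

```python
import numpy as np

def conformal_pvalue(ref_scores, test_score):
    """Marginally valid p-value for testing whether test_score is an inlier."""
    return (1 + np.sum(ref_scores >= test_score)) / (len(ref_scores) + 1)
```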
arXiv Detail & Related papers (2021-04-16T17:59:21Z)
- Estimation and Applications of Quantiles in Deep Binary Classification [0.0]
Quantile regression, based on check loss, is a widely used inferential paradigm in Statistics.
We consider the analogue of check loss in the binary classification setting.
We develop individualized confidence scores that can be used to decide whether a prediction is reliable.
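For reference, the check (pinball) loss, whose empirical minimizer is the q-th quantile; the paper studies its analogue for binary labels.

```python
import numpy as np

def check_loss(y, pred, q):
    """Pinball loss at level q; minimized in expectation by the q-th quantile."""
    u = np.asarray(y) - np.asarray(pred)
    return np.mean(np.maximum(q * u, (q - 1) * u))
```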
arXiv Detail & Related papers (2021-02-09T07:07:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.