Boosted Conformal Prediction Intervals
- URL: http://arxiv.org/abs/2406.07449v2
- Date: Sat, 09 Nov 2024 18:37:37 GMT
- Title: Boosted Conformal Prediction Intervals
- Authors: Ran Xie, Rina Foygel Barber, Emmanuel J. Candès
- Abstract summary: We employ machine learning techniques to improve upon a predefined conformity score function.
The boosted conformal procedure achieves substantial improvements in reducing interval length and decreasing deviation from target conditional coverage.
- Score: 5.762286612061953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a boosted conformal procedure designed to tailor conformalized prediction intervals toward specific desired properties, such as enhanced conditional coverage or reduced interval length. We employ machine learning techniques, notably gradient boosting, to systematically improve upon a predefined conformity score function. This process is guided by carefully constructed loss functions that measure the deviation of prediction intervals from the targeted properties. The procedure operates post-training, relying solely on model predictions and without modifying the trained model (e.g., the deep network). Systematic experiments demonstrate that starting from conventional conformal methods, our boosted procedure achieves substantial improvements in reducing interval length and decreasing deviation from target conditional coverage.
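As a concrete illustration of the pipeline the abstract describes, here is a minimal Python sketch. It is an assumption-laden stand-in, not the authors' implementation: the boosted correction is a GradientBoostingRegressor fit to residual magnitudes (one natural choice of score to boost), and calibration is plain split conformal, whereas the paper's actual losses target length and conditional coverage directly.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def boosted_split_conformal(f, X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Hypothetical sketch: boost a residual conformity score, then run
    split-conformal calibration. f is any fitted point predictor; it is
    never modified, only queried."""
    # Boosting stage: fit g to the residual magnitude on held-out data,
    # so the score |y - f(x)| / g(x) adapts interval width to x.
    g = GradientBoostingRegressor().fit(X_train, np.abs(y_train - f.predict(X_train)))

    def scale(X):
        return np.maximum(g.predict(X), 1e-6)  # keep the scores positive

    # Calibration stage: conformal quantile of the boosted scores.
    scores = np.abs(y_cal - f.predict(X_cal)) / scale(X_cal)
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = np.sort(scores)[k - 1]

    # Prediction stage: interval widths adapt to x through g.
    mu, s = f.predict(X_test), scale(X_test)
    return mu - q * s, mu + q * s
```
Because the boosting stage only queries model predictions on held-out data, the trained predictor f is never modified, matching the post-training property stated in the abstract.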
Related papers
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
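For intuition, one naive way to instantiate an "at least one admissible sample" guarantee is to calibrate the number of draws on held-out prompts; the sketch below does exactly that and is not the paper's sequential greedy filtering algorithm. `generate` and `is_admissible` are hypothetical stand-ins.
```python
import numpy as np

def calibrate_num_samples(prompts, generate, is_admissible, alpha, k_max=50):
    """Pick the smallest k such that, on calibration prompts, at least one
    of the first k generated samples is admissible for roughly a 1 - alpha
    fraction of prompts. A naive sketch of the stated guarantee only."""
    first_hit = []
    for p in prompts:
        hit = k_max + 1  # prompts with no admissible sample in the budget
        for j in range(k_max):
            if is_admissible(p, generate(p)):
                hit = j + 1
                break
        first_hit.append(hit)
    first_hit = np.sort(np.array(first_hit))
    # Conformal-style rank: the (1 - alpha) empirical quantile of first_hit.
    n = len(first_hit)
    k = first_hit[min(int(np.ceil((n + 1) * (1 - alpha))), n) - 1]
    return int(k)

def prediction_set(prompt, generate, k):
    # The test-time set is simply k fresh samples.
    return [generate(prompt) for _ in range(k)]
```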
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
- Conformal Thresholded Intervals for Efficient Regression [9.559062601251464]
Conformal Thresholded Intervals (CTI) is a novel conformal regression method that aims to produce the smallest possible prediction set with guaranteed coverage.
CTI constructs prediction sets by thresholding the estimated conditional interquantile intervals based on their length.
CTI achieves superior performance compared to state-of-the-art conformal regression methods across various datasets.
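Read literally, the construction in the summary above admits a short sketch: estimate a grid of conditional quantiles, score each calibration point by the length of the interquantile bin containing it, and return the union of short bins at test time. The quantile grid and the bin-union representation are our assumptions, not the paper's exact construction.
```python
import numpy as np

def cti_sets(q_cal, y_cal, q_test, alpha):
    """Sketch of length-thresholded interquantile prediction sets.

    q_cal, q_test: arrays of shape (n, m+1) holding estimated conditional
    quantiles at equally spaced levels, from any quantile regressor."""
    # Length of each interquantile bin; short bins = high conditional density.
    len_cal = np.diff(q_cal, axis=1)
    # Score: length of the bin that contains the observed y.
    idx = np.array([np.searchsorted(q, y) - 1 for q, y in zip(q_cal, y_cal)])
    idx = np.clip(idx, 0, len_cal.shape[1] - 1)
    scores = len_cal[np.arange(len(y_cal)), idx]
    # Conformal quantile of the scores gives the length threshold t.
    n = len(scores)
    t = np.sort(scores)[min(int(np.ceil((n + 1) * (1 - alpha))), n) - 1]
    # Test-time set: union of bins whose length is at most t,
    # returned as (left endpoints, right endpoints) per test point.
    len_test = np.diff(q_test, axis=1)
    return [(q[:-1][l <= t], q[1:][l <= t]) for q, l in zip(q_test, len_test)]
```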
arXiv Detail & Related papers (2024-07-19T17:47:08Z)
- Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles [4.6281736192809575]
Recent papers have introduced a novel approach to explaining why a Predictive Process Monitoring model for outcome-oriented predictions makes wrong predictions.
We show how to exploit the explanations to identify the most common features that induce a predictor to make mistakes in a semi-automated way.
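The semi-automated step can be pictured as plain aggregation over wrong predictions: given per-instance feature attributions from any explainer, count how often each feature dominates the explanation of a mistake. The sketch below is generic; `attributions` and the top-k rule are assumptions rather than the papers' specific explanation styles.
```python
import numpy as np
from collections import Counter

def common_error_features(attributions, y_true, y_pred, feature_names, top_k=3):
    """attributions: (n, d) array of per-instance feature attribution
    scores from any explainer. Returns the features that most often
    dominate explanations of wrong predictions."""
    attributions = np.asarray(attributions)
    wrong = np.asarray(y_true) != np.asarray(y_pred)
    counts = Counter()
    for row in np.abs(attributions[wrong]):
        for j in np.argsort(row)[::-1][:top_k]:  # top-k most influential
            counts[feature_names[j]] += 1
    return counts.most_common()
```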
arXiv Detail & Related papers (2023-03-27T06:37:55Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
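Concretely, the recipe can be sketched as follows, with the pretext task and models chosen only for illustration: an auxiliary model's self-supervised error, which is computable without the true label, enters as an extra feature of a learned residual scale.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ssl_normalized_scores(f, aux_model, pretext_target,
                          X_train, y_train, X_cal, y_cal):
    """Sketch: augment the inputs of a residual-scale model with a
    self-supervised error feature, then compute normalized nonconformity
    scores on the calibration split. The pretext task, aux_model, and the
    random-forest scale model are all illustrative assumptions."""
    def features(X):
        # The self-supervised error needs no true label y.
        aux_err = np.abs(aux_model.predict(X) - pretext_target(X))
        return np.column_stack([X, aux_err])

    # Fit the residual scale on training data, never on calibration data.
    res = np.abs(y_train - f.predict(X_train))
    scale = RandomForestRegressor().fit(features(X_train), res)

    s = np.maximum(scale.predict(features(X_cal)), 1e-6)
    return np.abs(y_cal - f.predict(X_cal)) / s  # calibrate these as usual
```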
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Improved Online Conformal Prediction via Strongly Adaptive Online Learning [86.4346936885507]
We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
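For orientation, the simplest online-conformal baseline in this line of work is the adaptive conformal inference (ACI) update of Gibbs and Candès, which adjusts the working miscoverage level after each observation; the strongly adaptive methods above refine this idea. A minimal sketch (the step size gamma and the running-quantile rule are our assumptions):
```python
import numpy as np

def adaptive_conformal_online(scores, alpha=0.1, gamma=0.01):
    """Run ACI over a stream of nonconformity scores: at each step the
    interval radius is an empirical (1 - alpha_t) quantile of past scores,
    and alpha_t moves up after coverage, down after a miss."""
    scores = np.asarray(scores)
    alpha_t, radii, covered = alpha, [], []
    for t in range(1, len(scores)):
        past = np.sort(scores[:t])
        a = float(np.clip(alpha_t, 1e-3, 1 - 1e-3))
        q = past[min(int(np.ceil((t + 1) * (1 - a))), t) - 1]
        radii.append(q)
        err = float(scores[t] > q)        # 1 if the new point is missed
        covered.append(1 - err)
        alpha_t += gamma * (alpha - err)  # the ACI update
    return np.array(radii), np.array(covered)
```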
arXiv Detail & Related papers (2023-02-15T18:59:30Z)
- Loss-Controlling Calibration for Predictive Models [5.51361762392299]
We propose a learning framework for calibrating predictive models to make loss-controlling predictions for exchangeable data.
In contrast to conformal prediction, the predictors built by the proposed loss-controlling approach are not limited to set predictors.
Our proposed method is applied to selective regression and high-impact weather forecasting problems.
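The general calibration recipe behind such guarantees can be sketched in the style of conformal risk control (a related technique, not necessarily the paper's exact construction): scan a threshold lambda and keep the smallest value whose corrected calibration loss stays below the target.
```python
import numpy as np

def calibrate_lambda(loss_fn, preds_cal, y_cal, target, lambdas):
    """Pick the smallest lambda whose adjusted empirical calibration loss
    is at most the target level. loss_fn(pred, y, lam) is any loss that is
    bounded by 1 and non-increasing in lambda; all names are illustrative."""
    n = len(y_cal)
    for lam in np.sort(lambdas):
        losses = np.array([loss_fn(p, y, lam) for p, y in zip(preds_cal, y_cal)])
        # Finite-sample correction in the spirit of conformal risk control,
        # assuming the loss is bounded by B = 1.
        if (losses.sum() + 1.0) / (n + 1) <= target:
            return lam
    return None  # no feasible lambda in the grid
```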
arXiv Detail & Related papers (2023-01-11T09:44:55Z)
- Predictive machine learning for prescriptive applications: a coupled training-validating approach [77.34726150561087]
We propose a new method for training predictive machine learning models for prescriptive applications.
This approach is based on tweaking the validation step in the standard training-validating-testing scheme.
Several experiments with synthetic data demonstrate promising results in reducing the prescription costs in both deterministic and real models.
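The tweak can be illustrated generically: rank candidate models on the validation set by the downstream prescription cost their forecasts induce, rather than by predictive error. `decide` and `cost` below are hypothetical placeholders.
```python
import numpy as np

def select_by_prescription_cost(models, X_val, y_val, decide, cost):
    """Generic sketch of coupling validation to the prescriptive task:
    each fitted model is scored by the cost of the decisions taken from
    its predictions, not by prediction accuracy. decide(y_hat) maps a
    forecast to a decision; cost(d, y) is the realized cost of decision d
    under outcome y (both assumed)."""
    def prescription_cost(m):
        decisions = [decide(y_hat) for y_hat in m.predict(X_val)]
        return np.mean([cost(d, y) for d, y in zip(decisions, y_val)])
    return min(models, key=prescription_cost)
```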
arXiv Detail & Related papers (2021-10-22T15:03:20Z)
- Learning Prediction Intervals for Regression: Generalization and Calibration [12.576284277353606]
We study the generation of prediction intervals in regression for uncertainty quantification.
We use general learning theory to characterize an optimality-feasibility tradeoff that encompasses Lipschitz-continuous and VC-subgraph classes.
We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of test performance compared to existing benchmarks.
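In standard notation (our phrasing, not the paper's), the optimality-feasibility tradeoff referenced above is the problem of producing the narrowest intervals that remain feasible for the coverage constraint:
```latex
\min_{I \in \mathcal{I}} \;\; \mathbb{E}\big[\operatorname{length} I(X)\big]
\qquad \text{subject to} \qquad \mathbb{P}\big(Y \in I(X)\big) \ge 1 - \delta,
```
where \mathcal{I} is the interval class (e.g., Lipschitz or VC-subgraph) and \delta is the target miscoverage level.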
arXiv Detail & Related papers (2021-02-26T17:55:30Z)
- Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization [18.981576950505442]
Direct loss minimization is a popular approach for learning predictors over structured label spaces.
We show that it strikes a better balance between the learned score function and the randomized noise in structured prediction.
arXiv Detail & Related papers (2020-07-11T08:59:11Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of existing variants can be covered by the unified extrapolation framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
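A generic extragradient-style step, written from the general description above rather than from the paper, conveys the extrapolation idea: evaluate the gradient at a look-ahead point and apply it at the current iterate. The coefficient `extrap` is an assumed hyperparameter.
```python
def extragradient_sgd_step(w, grad_fn, lr, extrap=0.5):
    """Take a trial step, evaluate the gradient at the extrapolated
    point, and apply that gradient at the original iterate. grad_fn(w)
    returns a stochastic gradient aligned with w."""
    w_look = [wi - extrap * lr * gi for wi, gi in zip(w, grad_fn(w))]
    g = grad_fn(w_look)                  # gradient at the extrapolated point
    return [wi - lr * gi for wi, gi in zip(w, g)]
```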
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers.
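The mechanism is simple enough to state in code. A minimal sketch, assuming a hypothetical interface in which `layer_logits` yields one classifier output per internal layer:
```python
def patience_based_early_exit(layer_logits, patience=3):
    """Exit as soon as `patience` consecutive internal classifiers agree
    on the same label; otherwise fall through to the final layer."""
    streak, last, pred = 0, None, None
    for logits in layer_logits:          # one classifier per layer
        pred = max(range(len(logits)), key=logits.__getitem__)  # argmax
        streak = streak + 1 if pred == last else 1
        last = pred
        if streak >= patience:
            return pred                  # early exit: later layers skipped
    return pred                          # final-layer prediction
```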
arXiv Detail & Related papers (2020-06-07T13:38:32Z)