Conformal online model aggregation
- URL: http://arxiv.org/abs/2403.15527v3
- Date: Mon, 20 Oct 2025 09:47:21 GMT
- Title: Conformal online model aggregation
- Authors: Matteo Gasparin, Aaditya Ramdas
- Abstract summary: Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. We propose a wrapper that takes in several conformal prediction sets and outputs a single adaptively-combined prediction set. Our method, called conformal online model aggregation (COMA), is based on combining the prediction sets from several algorithms by weighted voting.
- Score: 36.191356601153146
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any prediction model and converts point predictions into set predictions with a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: given a set of prediction models, which one should we conformalize? This paper suggests that instead of performing model selection, it can be prudent and practical to perform conformal set aggregation in an online, adaptive fashion. We propose a wrapper that takes in several conformal prediction sets (themselves wrapped around black-box prediction models), and outputs a single adaptively-combined prediction set. Our method, called conformal online model aggregation (COMA), is based on combining the prediction sets from several algorithms by weighted voting, and can be thought of as a sort of online stacking of the underlying conformal sets. As long as the input sets have (distribution-free) coverage guarantees, COMA retains coverage guarantees, under a negative correlation assumption between errors and weights. We verify that the assumption holds empirically in all settings considered. COMA is well-suited for decentralized or distributed settings, where different users may have different models, and are only willing to share their prediction sets for a new test point in a black-box fashion. As we demonstrate, it is also well-suited to settings with distribution drift and shift, where model selection can be imprudent.
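The weighted-voting aggregation described in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions (a shared candidate grid, a 1/2 vote threshold, and an exponential-weights update penalizing large sets), not the paper's exact algorithm; the function names and parameters are hypothetical.

```python
import numpy as np

def aggregate_sets(sets, weights, tau=0.5):
    """Weighted-vote aggregation of K conformal prediction sets.

    sets: list of K boolean arrays over a shared candidate grid, where
          sets[k][j] is True if candidate j lies in model k's set.
    weights: nonnegative array of shape (K,) summing to 1.
    Returns the combined set: candidates whose weighted vote >= tau.
    """
    votes = np.average(np.array(sets, dtype=float), axis=0, weights=weights)
    return votes >= tau

def update_weights(weights, losses, eta=0.1):
    """Exponential-weights update: downweight models with large loss
    (e.g., set size plus a miscoverage penalty)."""
    w = weights * np.exp(-eta * np.asarray(losses))
    return w / w.sum()
```

For example, with three models voting on three candidates under uniform weights, a candidate included by two of the three models clears the 1/2 threshold and enters the combined set, while a candidate included by only one model does not.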
Related papers
- Enhanced Multi-model Online Conformal Prediction [25.495949162960624]
Conformal prediction is a framework for uncertainty quantification that constructs prediction sets for previously unseen data. The efficiency of these prediction sets, measured by their size, depends on the choice of the underlying learning model. This work develops a novel multi-model online conformal prediction algorithm that reduces computational complexity and improves prediction efficiency.
arXiv Detail & Related papers (2026-01-04T23:44:43Z) - Distribution-informed Online Conformal Prediction [53.674678995825666]
We propose Conformal Optimistic Prediction (COP), an online conformal prediction algorithm that incorporates the underlying data pattern into the update rule. COP produces tighter prediction sets when a predictable pattern exists, while retaining valid coverage guarantees even when estimates are inaccurate. We prove that COP can achieve valid coverage and construct shorter prediction intervals than other baselines.
arXiv Detail & Related papers (2025-12-08T17:51:49Z) - Graph-Structured Feedback Multimodel Ensemble Online Conformal Prediction [14.188004615463742]
We propose a novel multi-model online conformal prediction algorithm. It identifies a subset of effective models at each time step by collecting feedback from a bipartite graph. A model is then selected from this subset to construct the prediction set, resulting in reduced computational complexity and smaller prediction sets.
arXiv Detail & Related papers (2025-06-26T00:06:11Z) - HopCast: Calibration of Autoregressive Dynamics Models [0.0]
Deep learning models are often trained to approximate dynamical systems that can be modeled using differential equations.
These models are optimized to predict one step ahead and produce calibrated predictions if the predictive model can quantify uncertainty.
This work is the first to benchmark existing uncertainty propagation methods based on calibration errors with deep ensembles for multi-step predictions.
arXiv Detail & Related papers (2025-01-27T23:59:23Z) - Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores [52.92618442300405]
It is impossible to achieve exact, distribution-free conditional coverage in finite samples. We propose an alternative conformal prediction algorithm that targets coverage where it matters most.
arXiv Detail & Related papers (2025-01-17T12:01:56Z) - Multi-model Ensemble Conformal Prediction in Dynamic Environments [14.188004615463742]
We introduce a novel adaptive conformal prediction framework, where the model used for creating prediction sets is selected on the fly from multiple candidate models.
The proposed algorithm is proven to achieve strongly adaptive regret over all intervals while maintaining valid coverage.
arXiv Detail & Related papers (2024-11-06T05:57:28Z) - Stochastic Online Conformal Prediction with Semi-Bandit Feedback [29.334511328067777]
We consider the online learning setting, where examples arrive over time, and the goal is to construct prediction sets dynamically. We propose a novel conformal prediction algorithm targeted at this setting, and prove that it obtains sublinear regret compared to the optimal conformal predictor.
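Online conformal methods of this kind typically track miscoverage over time and adjust the working error level after each label is revealed. A common recursion in this line of work (a sketch, not necessarily this paper's algorithm; the step size `gamma` is an illustrative assumption) is:

```python
def online_alpha_update(alpha_t, covered, target_alpha, gamma=0.01):
    """One step of a miscoverage-tracking recursion:
        alpha_{t+1} = alpha_t + gamma * (target_alpha - err_t),
    where err_t = 1 if the time-t prediction set missed the true label.
    A miss decreases alpha (wider future sets); a hit increases it."""
    err = 0.0 if covered else 1.0
    return alpha_t + gamma * (target_alpha - err)
```

Over a long run, the average miscoverage is driven toward `target_alpha`, even under distribution shift, because every miss nudges the sets wider and every hit nudges them narrower.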
arXiv Detail & Related papers (2024-05-22T00:42:49Z) - Awareness of uncertainty in classification using a multivariate model and multi-views [1.3048920509133808]
The proposed model regularizes uncertain predictions and is trained to produce both the predictions and their uncertainty estimates.
Given the multi-view predictions together with their uncertainties and confidences, we propose several methods to compute the final predictions.
The proposed methodology was tested using CIFAR-10 dataset with clean and noisy labels.
arXiv Detail & Related papers (2024-04-16T06:40:51Z) - Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Self-Calibrating Conformal Prediction [16.606421967131524]
We introduce Self-Calibrating Conformal Prediction to deliver calibrated point predictions alongside prediction intervals with finite-sample validity conditional on these predictions.
We show that our method improves calibrated interval efficiency through model calibration and offers a practical alternative to feature-conditional validity.
arXiv Detail & Related papers (2024-02-11T21:12:21Z) - Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision [0.0]
Traditionally, conformal prediction inference requires a data-independent specification of miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
arXiv Detail & Related papers (2023-04-12T20:56:43Z) - Post-Selection Confidence Bounds for Prediction Performance [2.28438857884398]
In machine learning, the selection of a promising model from a potentially large number of competing models and the assessment of its generalization performance are critical tasks.
We propose an algorithm to compute valid lower confidence bounds for multiple models that have been selected based on their prediction performance on the evaluation set.
arXiv Detail & Related papers (2022-10-24T13:28:43Z) - Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z) - Selective Prediction via Training Dynamics [31.708701583736644]
We show that state-of-the-art selective prediction performance can be attained solely from studying the training dynamics of a model. In particular, we reject data points exhibiting too much disagreement with the final prediction at late stages in training. The proposed rejection mechanism is domain-agnostic (i.e., it works for both discrete and real-valued prediction) and can be flexibly combined with existing selective prediction approaches.
arXiv Detail & Related papers (2022-05-26T17:51:29Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Short-term prediction of Time Series based on bounding techniques [0.0]
This paper reconsiders the prediction problem in the time-series framework using a new non-parametric approach.
The innovation is to consider both deterministic and deterministic-stochastic assumptions in order to obtain an upper bound on the prediction error.
A benchmark is included to illustrate that the proposed predictor obtains suitable results in a prediction scheme, and can be an interesting alternative to classical non-parametric methods.
arXiv Detail & Related papers (2021-01-26T11:27:36Z) - AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP).
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.