Approximating Full Conformal Prediction at Scale via Influence Functions
- URL: http://arxiv.org/abs/2202.01315v1
- Date: Wed, 2 Feb 2022 22:38:40 GMT
- Title: Approximating Full Conformal Prediction at Scale via Influence Functions
- Authors: Javier Abad, Umang Bhatt, Adrian Weller and Giovanni Cherubin
- Abstract summary: Conformal prediction (CP) is a wrapper around traditional machine learning models.
In this paper, we use influence functions to efficiently approximate full CP.
- Score: 30.391742057634264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformal prediction (CP) is a wrapper around traditional machine learning
models, giving coverage guarantees under the sole assumption of
exchangeability; in classification problems, for a chosen significance level
$\varepsilon$, CP guarantees that the error rate is at most
$\varepsilon$, irrespective of whether the underlying model is misspecified.
However, the prohibitive computational costs of full CP led researchers to
design scalable alternatives, which alas do not attain the same guarantees or
statistical power as full CP. In this paper, we use influence functions to
efficiently approximate full CP. We prove that our method is a consistent
approximation of full CP, and empirically show that the approximation error
becomes smaller as the training set increases; e.g., for $10^{3}$ training
points the two methods output p-values that are $<10^{-3}$ apart: a negligible
error for any practical application. Our methods enable scaling full CP to
large real-world datasets. We compare our full CP approximation ACP to
mainstream CP alternatives, and observe that our method is computationally
competitive whilst enjoying the statistical predictive power of full CP.
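To make the procedure concrete, the sketch below implements the naive full CP classifier that ACP is designed to approximate: for every candidate label the test point is added to the training set, the underlying model is refit, and a p-value is computed from nonconformity scores. The logistic-regression model and the 1 - p(y|x) nonconformity score are illustrative assumptions, not the paper's choices, and the influence-function approximation itself is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def full_cp_prediction_set(X_train, y_train, x_test, labels, eps=0.1):
    """Naive full conformal prediction for a single test point.

    For each candidate label y, the test point is temporarily added to the
    training set with label y, the model is refit, and a p-value is computed
    as the fraction of points whose nonconformity score is at least as large
    as the test point's. Labels with p-value > eps form the prediction set.
    """
    prediction_set = []
    for y in labels:
        # Augment the training data with the candidate-labelled test point.
        X_aug = np.vstack([X_train, np.asarray(x_test).reshape(1, -1)])
        y_aug = np.append(y_train, y)

        # Full CP refits the underlying model once per candidate label;
        # this is the step that ACP approximates with influence functions.
        model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

        # Illustrative nonconformity score: 1 - probability of the assigned label.
        proba = model.predict_proba(X_aug)
        col = {c: i for i, c in enumerate(model.classes_)}
        scores = 1.0 - proba[np.arange(len(y_aug)), [col[c] for c in y_aug]]

        # p-value: fraction of points at least as nonconforming as the
        # test point (the last row of the augmented data).
        p_value = np.mean(scores >= scores[-1])
        if p_value > eps:
            prediction_set.append(y)
    return prediction_set
```

Under exchangeability, a set built this way misses the true label with probability at most eps; the refit inside the loop (one per candidate label, per test point) is exactly the cost that makes naive full CP prohibitive at scale and that ACP sidesteps via influence functions.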
Related papers
- Robust Yet Efficient Conformal Prediction Sets [53.78604391939934]
Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label with a user-specified probability.
We derive provably robust sets by bounding the worst-case change in conformity scores.
arXiv Detail & Related papers (2024-07-12T10:59:44Z) - Verifiably Robust Conformal Prediction [1.391198481393699]
This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages neural network verification methods to recover coverage guarantees under adversarial attacks.
Our method is the first to support perturbations bounded by arbitrary norms including $\ell_1$, $\ell_2$, and $\ell_\infty$, as well as regression tasks.
In every case, VRCP achieves above-nominal coverage and yields significantly more efficient and informative prediction regions than the state of the art (SotA).
arXiv Detail & Related papers (2024-05-29T09:50:43Z) - On Temperature Scaling and Conformal Prediction of Deep Classifiers [9.975341265604577]
Conformal Prediction (CP) produces a prediction set of candidate labels that contains the true label with a user-specified probability.
In practice, both calibrated confidence estimates and small prediction sets are desirable, yet the interplay between the two has so far not been investigated.
We show that while Temperature Scaling (TS) calibration improves the class-conditional coverage of adaptive CP methods, surprisingly, it negatively affects their prediction set sizes (see the temperature-scaling sketch after this list).
arXiv Detail & Related papers (2024-02-08T16:45:12Z) - Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z) - Probabilistically robust conformal prediction [9.401004747930974]
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks.
Almost all existing work on CP assumes clean test data, and little is known about the robustness of CP algorithms.
This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations.
arXiv Detail & Related papers (2023-07-31T01:32:06Z) - Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model with the conformal wrapper end-to-end.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size.
arXiv Detail & Related papers (2021-10-18T11:25:33Z) - Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality [131.45028999325797]
We develop a doubly robust off-policy actor-critic algorithm (DR-Off-PAC) for discounted MDPs.
DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize.
We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy.
arXiv Detail & Related papers (2021-02-23T18:56:13Z) - Exact Optimization of Conformal Predictors via Incremental and
Decremental Learning [46.9970555048259]
Conformal Predictors (CP) are wrappers around ML methods, providing error guarantees under weak assumptions on the data distribution.
They are suitable for a wide range of problems, from classification and regression to anomaly detection.
We show that it is possible to speed up a CP classifier considerably by studying it in conjunction with the underlying ML method and by exploiting incremental and decremental learning.
arXiv Detail & Related papers (2021-02-05T15:31:37Z) - On the Practicality of Differential Privacy in Federated Learning by
Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDP with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z) - Computing Valid p-value for Optimal Changepoint by Selective Inference
using Dynamic Programming [21.361641617994714]
We introduce a novel method to perform statistical inference on the significance of changepoints (CPs).
Based on the selective inference (SI) framework, we propose an exact (non-asymptotic) approach to compute valid p-values for testing the significance of the CPs.
We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method is more powerful than existing methods.
arXiv Detail & Related papers (2020-02-21T05:07:22Z)