Private Prediction Sets
- URL: http://arxiv.org/abs/2102.06202v3
- Date: Sun, 3 Mar 2024 06:47:19 GMT
- Title: Private Prediction Sets
- Authors: Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, and Michael I. Jordan
- Abstract summary: Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
- Score: 72.75711776601973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world settings involving consequential decision-making, the
deployment of machine learning systems generally requires both reliable
uncertainty quantification and protection of individuals' privacy. We present a
framework that treats these two desiderata jointly. Our framework is based on
conformal prediction, a methodology that augments predictive models to return
prediction sets that provide uncertainty quantification -- they provably cover
the true response with a user-specified probability, such as 90%. One might
hope that when used with privately-trained models, conformal prediction would
yield privacy guarantees for the resulting prediction sets; unfortunately, this
is not the case. To remedy this key problem, we develop a method that takes any
pre-trained predictive model and outputs differentially private prediction
sets. Our method follows the general approach of split conformal prediction; we
use holdout data to calibrate the size of the prediction sets but preserve
privacy by using a privatized quantile subroutine. This subroutine compensates
for the noise introduced to preserve privacy in order to guarantee correct
coverage. We evaluate the method on large-scale computer vision datasets.
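
The recipe in the abstract can be made concrete with a short sketch: split conformal calibration on holdout data, with a differentially private quantile in place of the usual empirical quantile. This is a minimal illustration rather than the paper's exact construction; the Laplace-noised histogram, the choice of conformity score, and the small quantile inflation are all stand-in assumptions for the privatized quantile subroutine and the precise coverage correction the paper derives.

```python
import numpy as np

def dp_quantile(scores, q, epsilon, bins=1000, lo=0.0, hi=1.0):
    # Differentially private quantile via a noisy histogram: each
    # calibration point lands in exactly one bin, so changing one record
    # alters at most two counts by 1 (L1 sensitivity 2), and adding
    # Laplace(2/epsilon) noise to the counts gives epsilon-DP.
    edges = np.linspace(lo, hi, bins + 1)
    counts, _ = np.histogram(scores, bins=edges)
    noisy = np.clip(counts + np.random.laplace(scale=2.0 / epsilon, size=bins),
                    0.0, None)  # clipping is post-processing, so still DP
    cdf = np.cumsum(noisy) / max(noisy.sum(), 1e-9)
    idx = int(np.searchsorted(cdf, q))
    return edges[min(idx + 1, bins)]

def private_prediction_set(cal_probs, cal_labels, test_probs,
                           alpha=0.1, epsilon=1.0):
    # Split conformal prediction: calibrate a score threshold on holdout
    # data, then include every class whose score falls below it.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]  # 1 - p(true class)
    # Inflate the quantile level slightly to compensate for the DP noise;
    # the paper derives the exact correction, 0.01 is a placeholder.
    q = min(1.0, (1 - alpha) * (n + 1) / n + 0.01)
    tau = dp_quantile(scores, q, epsilon)
    return np.where(1.0 - test_probs <= tau)[0]
```

Because the holdout data is touched only through the noisy histogram, the released threshold, and hence every prediction set built from it, is differentially private with respect to the calibration set.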
Related papers
- Robust Conformal Prediction Using Privileged Information [17.886554223172517]
We develop a method to generate prediction sets with a guaranteed coverage rate that is robust to corruptions in the training data.
Our approach builds on conformal prediction, a powerful framework to construct prediction sets that are valid under the i.i.d. assumption.
arXiv Detail & Related papers (2024-06-08T08:56:47Z)
- Conformal online model aggregation [29.43493007296859]
This paper proposes a new approach to conformal model aggregation in online settings.
It combines the prediction sets from several algorithms by voting, with weights on the models adapted over time based on past performance; a sketch of this recipe follows the entry.
arXiv Detail & Related papers (2024-03-22T15:40:06Z)
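
Under one natural reading of this summary, aggregation is a weighted vote over the candidate sets, with an online multiplicative update on the model weights. A minimal sketch, where the 0.5 vote threshold and the exponential-weights miscoverage loss are generic assumptions rather than the paper's exact scheme:

```python
import numpy as np

def aggregate_set(sets, weights, threshold=0.5):
    # Keep a label if the models whose prediction sets contain it
    # carry more than `threshold` of the total weight.
    w = weights / weights.sum()
    labels = set().union(*sets)
    return {y for y in labels
            if sum(wi for wi, s in zip(w, sets) if y in s) > threshold}

def update_weights(weights, sets, y_observed, eta=0.1):
    # Exponential-weights update: models whose set missed the observed
    # label are down-weighted for future rounds.
    losses = np.array([0.0 if y_observed in s else 1.0 for s in sets])
    return weights * np.exp(-eta * losses)
```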
- Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision [0.0]
Traditionally, conformal prediction inference requires a data-independent specification of the miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
arXiv Detail & Related papers (2023-04-12T20:56:43Z)
- One-Shot Federated Conformal Prediction [0.0]
We introduce a conformal prediction method to construct prediction sets in a one-shot federated learning setting.
We prove that, for any distribution, it is possible to output prediction sets with the desired coverage in only one round of communication; one plausible construction is sketched after the entry.
arXiv Detail & Related papers (2023-02-13T12:46:39Z)
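
The one-round protocol can be pictured as a quantile of quantiles: each agent sends a single quantile of its local conformity scores, and the server returns a quantile of those scalars as the global threshold. The sketch below is an assumed shape consistent with the summary; choosing the two quantile orders so that coverage is provably at least 1 - alpha is the technical heart of the paper, and the naive orders here are placeholders.

```python
import numpy as np

def local_quantile(scores, alpha):
    # Each agent computes one conformal quantile of its own scores and
    # transmits only this scalar (one round, one message per agent).
    n = len(scores)
    k = min(n - 1, int(np.ceil((1 - alpha) * (n + 1))) - 1)
    return np.sort(scores)[k]

def one_shot_threshold(local_quantiles, alpha):
    # The server aggregates by taking a quantile of the agents' quantiles.
    m = len(local_quantiles)
    k = min(m - 1, int(np.ceil((1 - alpha) * (m + 1))) - 1)
    return np.sort(np.asarray(local_quantiles))[k]
```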
- Distribution-Free Finite-Sample Guarantees and Split Conformal Prediction [0.0]
Split conformal prediction represents a promising avenue to obtain finite-sample guarantees under minimal distribution-free assumptions.
We highlight the connection between split conformal prediction and classical tolerance predictors developed in the 1940s; this connection is illustrated after the entry.
arXiv Detail & Related papers (2022-10-26T14:12:24Z)
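
The link to tolerance predictors is concrete: with i.i.d. calibration data, the realized coverage of a split conformal set follows an exact Beta distribution, a classical finite-sample fact. A small illustration, where the values of n and alpha are arbitrary examples:

```python
from math import ceil
from scipy.stats import beta

def split_conformal_coverage(n, alpha):
    # With n i.i.d. calibration scores and the k-th smallest score as
    # threshold, k = ceil((1 - alpha) * (n + 1)), the coverage of the
    # resulting prediction set is distributed Beta(k, n + 1 - k).
    k = ceil((1 - alpha) * (n + 1))
    return beta(k, n + 1 - k)

dist = split_conformal_coverage(n=1000, alpha=0.1)
print(dist.mean())      # ~0.9001, at least the nominal 1 - alpha = 0.9
print(dist.ppf(0.05))   # a 95%-confidence lower bound on realized coverage
```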
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks; an ensemble sketch follows the entry.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
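
For the ensemble-based flavor mentioned here, a minimal sketch: run every ensemble member on the same input, take the mean as the deterministic prediction and the member disagreement as the uncertainty estimate. `models` is assumed to be any sequence of callables returning arrays of the same shape.

```python
import numpy as np

def ensemble_predict(models, x):
    # Stack the members' outputs: agreement signals confidence,
    # disagreement signals (epistemic) uncertainty.
    preds = np.stack([m(x) for m in models])  # (n_members, ...)
    return preds.mean(axis=0), preds.var(axis=0)
```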
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in a machine learning setting wish to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP).
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z)
- Individual Calibration with Randomized Forecasting [116.2086707626651]
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized.
We design a training objective to enforce individual calibration and use it to train randomized regression functions.
arXiv Detail & Related papers (2020-06-18T05:53:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.