Private Prediction Sets
- URL: http://arxiv.org/abs/2102.06202v3
- Date: Sun, 3 Mar 2024 06:47:19 GMT
- Title: Private Prediction Sets
- Authors: Anastasios N. Angelopoulos and Stephen Bates and Tijana Zrnic and Michael I. Jordan
- Abstract summary: Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
- Score: 72.75711776601973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world settings involving consequential decision-making, the
deployment of machine learning systems generally requires both reliable
uncertainty quantification and protection of individuals' privacy. We present a
framework that treats these two desiderata jointly. Our framework is based on
conformal prediction, a methodology that augments predictive models to return
prediction sets that provide uncertainty quantification -- they provably cover
the true response with a user-specified probability, such as 90%. One might
hope that when used with privately-trained models, conformal prediction would
yield privacy guarantees for the resulting prediction sets; unfortunately, this
is not the case. To remedy this key problem, we develop a method that takes any
pre-trained predictive model and outputs differentially private prediction
sets. Our method follows the general approach of split conformal prediction; we
use holdout data to calibrate the size of the prediction sets but preserve
privacy by using a privatized quantile subroutine. This subroutine compensates
for the noise introduced to preserve privacy in order to guarantee correct
coverage. We evaluate the method on large-scale computer vision datasets.
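
To make the calibration step concrete, below is a minimal sketch in the spirit of the abstract: split-conformal calibration where the empirical quantile of holdout nonconformity scores is replaced by a differentially private one. The exponential-mechanism quantile, the `gamma` inflation constant, and all function names here are illustrative assumptions, not the paper's exact subroutine; the paper derives a precise correction for the privacy noise so that coverage is guaranteed.

```python
import numpy as np

def dp_quantile(scores, q, epsilon, rng):
    """q-quantile via the exponential mechanism over order statistics.

    The utility of the i-th order statistic is -|i - target_rank|, which
    changes by at most 1 when one calibration point is added or removed,
    so sampling with weight exp(epsilon * utility / 2) is epsilon-DP.
    """
    s = np.sort(np.asarray(scores, dtype=float))
    n = len(s)
    target = min(n - 1, max(0, int(np.ceil(q * n)) - 1))
    utilities = -np.abs(np.arange(n) - target)
    logits = (epsilon / 2.0) * utilities
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return s[rng.choice(n, p=probs)]

def private_conformal_threshold(cal_scores, alpha, epsilon, gamma=0.02, rng=None):
    """Privately calibrate the score threshold on holdout data.

    `gamma` is a stand-in inflation of the quantile level; the paper
    computes an exact correction so coverage stays >= 1 - alpha despite
    the noise added by the private quantile subroutine.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(cal_scores)
    q = min(1.0, np.ceil((1 - alpha + gamma) * (n + 1)) / n)
    return dp_quantile(cal_scores, q, epsilon, rng)

def prediction_set(label_probs, tau):
    """All labels whose nonconformity score 1 - p_y is at most tau."""
    return [y for y, p in enumerate(label_probs) if 1.0 - p <= tau]

# Toy usage: 90% target coverage, privacy budget epsilon = 1.
rng = np.random.default_rng(1)
cal_scores = 1.0 - rng.beta(8, 2, size=5000)  # scores of true labels
tau = private_conformal_threshold(cal_scores, alpha=0.1, epsilon=1.0)
print(prediction_set([0.05, 0.70, 0.20, 0.05], tau))
```

Because the private quantile is noisy, the quantile level is deliberately inflated; the design question the paper answers is how large that inflation must be to restore the exact coverage guarantee.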
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
- Certification for Differentially Private Prediction in Gradient-Based Training [36.686002369773014]
We use convex relaxation and bound propagation to compute a provable upper bound on the local and smooth sensitivity of a prediction.
This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting.
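As a simplified illustration of why a tighter sensitivity bound helps, the hypothetical sketch below releases a prediction with Laplace noise whose scale is proportional to the certified bound: halving the bound halves the noise at the same epsilon. This is a simplified sketch only; the bound-propagation step that certifies the bound is the paper's contribution and is not reproduced here, and the subtleties of calibrating noise to local rather than global sensitivity are glossed over.

```python
import numpy as np

def private_prediction(value, sensitivity_bound, epsilon, rng):
    """Release a scalar prediction with Laplace noise scaled to a
    certified sensitivity upper bound (illustrative mechanism only):
    a tighter bound means less noise for the same privacy budget."""
    return value + rng.laplace(scale=sensitivity_bound / epsilon)

rng = np.random.default_rng(0)
print(private_prediction(0.83, sensitivity_bound=0.05, epsilon=1.0, rng=rng))
```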
arXiv Detail & Related papers (2024-06-19T10:47:00Z)
- Conformal online model aggregation [29.43493007296859]
This paper proposes a new approach to conformal model aggregation in online settings.
It is based on combining the prediction sets from several algorithms by voting, where weights on the models are adapted over time based on past performance.
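Since the summary specifies only voting with weights adapted from past performance, here is a minimal, assumption-laden sketch of one such scheme: weighted majority voting over the models' prediction sets with an exponential-weights update. The 0/1 loss, the voting threshold, and the learning rate are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def aggregate_sets(model_sets, weights, threshold=0.5):
    """Label y enters the aggregate set when the weighted fraction of
    models whose set contains y exceeds `threshold` (illustrative rule)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    votes = {}
    for wi, s in zip(w, model_sets):
        for y in s:
            votes[y] = votes.get(y, 0.0) + wi
    return {y for y, v in votes.items() if v > threshold}

def update_weights(weights, model_sets, y_true, eta=0.1):
    """Exponential-weights update from past performance: a model is
    penalized (hypothetical 0/1 loss) when its set missed the truth."""
    losses = np.array([0.0 if y_true in s else 1.0 for s in model_sets])
    return np.asarray(weights) * np.exp(-eta * losses)

# One online round with three conformal predictors.
weights = np.ones(3)
sets_t = [{0, 1}, {1}, {1, 2}]
print(aggregate_sets(sets_t, weights))  # {1}
weights = update_weights(weights, sets_t, y_true=1)
```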
arXiv Detail & Related papers (2024-03-22T15:40:06Z)
- Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision [0.0]
Traditionally, conformal prediction inference requires a data-independent specification of the miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
arXiv Detail & Related papers (2023-04-12T20:56:43Z)
- One-Shot Federated Conformal Prediction [0.0]
We introduce a conformal prediction method to construct prediction sets in a one-shot federated learning setting.
We prove that for any distribution, it is possible to output prediction sets with the desired coverage in only one round of communication.
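One way to realize a single communication round is a quantile-of-quantiles construction, sketched below under the assumption that each agent holds its own calibration scores and sends the server a single order statistic. The levels `q_local` and `q_server` are placeholders that would need to be set according to the paper's analysis for the coverage guarantee to hold.

```python
import numpy as np

def local_message(cal_scores, q_local):
    """Each agent sends one order statistic of its own scores."""
    return np.quantile(np.asarray(cal_scores), q_local, method="higher")

def one_shot_threshold(messages, q_server):
    """The server takes a quantile of the agents' messages to obtain
    a single global score threshold (sketch of the one-shot idea)."""
    return np.quantile(np.asarray(messages), q_server, method="higher")

# m agents, one message each: one round of communication in total.
rng = np.random.default_rng(0)
msgs = [local_message(rng.normal(size=500), q_local=0.92) for _ in range(20)]
tau = one_shot_threshold(msgs, q_server=0.5)
```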
arXiv Detail & Related papers (2023-02-13T12:46:39Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We examine two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Individual Calibration with Randomized Forecasting [116.2086707626651]
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized.
We design a training objective to enforce individual calibration and use it to train randomized regression functions.
arXiv Detail & Related papers (2020-06-18T05:53:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.