Propose, Test, Release: Differentially private estimation with high
probability
- URL: http://arxiv.org/abs/2002.08774v1
- Date: Wed, 19 Feb 2020 01:29:05 GMT
- Title: Propose, Test, Release: Differentially private estimation with high
probability
- Authors: Victor-Emmanuel Brunel and Marco Avella-Medina
- Abstract summary: We introduce a new general version of the PTR mechanism that allows us to derive high probability error bounds for differentially private estimators.
Our algorithms provide the first statistical guarantees for differentially private estimation of the median and mean without any boundedness assumptions on the data.
- Score: 9.25177374431812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive concentration inequalities for differentially private median and
mean estimators building on the "Propose, Test, Release" (PTR) mechanism
introduced by Dwork and Lei (2009). We introduce a new general version of the
PTR mechanism that allows us to derive high probability error bounds for
differentially private estimators. Our algorithms provide the first statistical
guarantees for differentially private estimation of the median and mean without
any boundedness assumptions on the data, and without assuming that the target
population parameter lies in some known bounded interval. Our procedures do not
rely on any truncation of the data and provide the first sub-Gaussian high
probability bounds for differentially private median and mean estimation, for
possibly heavy-tailed random variables.
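The PTR pattern (propose a sensitivity bound, privately test whether the data are far from any dataset violating it, then release with noise calibrated to that bound) can be sketched for the median roughly as follows. This is an illustrative simplification under assumed choices, not the authors' algorithm: the distance computation and the `log(1/delta)/epsilon` threshold are standard textbook ingredients, not taken from this paper.

```python
import numpy as np

def ptr_median(data, epsilon, delta, proposed_sensitivity, rng=None):
    """Hedged sketch of the Propose-Test-Release pattern (Dwork & Lei, 2009).

    1. Propose: a bound `proposed_sensitivity` on the local sensitivity
       of the sample median.
    2. Test: privately check that the dataset is far (in Hamming distance)
       from any dataset where that bound fails.
    3. Release: if the test passes, output the median plus Laplace noise
       scaled to the proposed bound; otherwise refuse to answer.
    """
    rng = rng or np.random.default_rng()
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    m = n // 2  # index of the (upper) sample median

    # Distance (number of points that must change) to a dataset whose
    # local sensitivity exceeds the proposed bound; for the median this
    # depends on gaps between order statistics around the middle.
    dist = 0
    while (m + dist + 1 < n and m - dist - 1 >= 0
           and x[m + dist + 1] - x[m - dist - 1] <= proposed_sensitivity):
        dist += 1

    # Noisy test: the distance plus Laplace(1/epsilon) noise must clear
    # a threshold of log(1/delta)/epsilon, or the mechanism aborts.
    noisy_dist = dist + rng.laplace(scale=1.0 / epsilon)
    if noisy_dist <= np.log(1.0 / delta) / epsilon:
        return None  # refuse to release

    # Release: median plus Laplace noise calibrated to the tested bound.
    return x[m] + rng.laplace(scale=proposed_sensitivity / epsilon)
```

Refusing to answer when the test fails is what lets the mechanism calibrate its noise to a local, data-dependent sensitivity rather than a worst-case global one, which for the unbounded median would be infinite.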
Related papers
- Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation [62.2436697657307]
Prediction-powered inference (PPI) is a method that improves statistical estimates from limited human-labeled data.
We propose a method called Stratified Prediction-Powered Inference (StratPPI).
We show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies.
arXiv Detail & Related papers (2024-06-06T17:37:39Z)
- Resampling methods for private statistical inference [1.8110941972682346]
We consider the task of constructing confidence intervals with differential privacy.
We propose two private variants of the non-parametric bootstrap, which privately compute the median of the results of multiple "little" bootstraps run on partitions of the data.
For a fixed differential privacy parameter $\epsilon$, our methods enjoy the same error rates as the non-private bootstrap, up to logarithmic factors in the sample size $n$.
arXiv Detail & Related papers (2024-02-11T08:59:02Z)
- Differentially private projection-depth-based medians [0.0]
We develop $(\epsilon,\delta)$-differentially private projection-depth-based medians using the propose-test-release (PTR) and exponential mechanisms.
We quantify the probability that the test in PTR fails, as well as the cost of privacy, via finite-sample deviation bounds.
arXiv Detail & Related papers (2023-12-12T23:17:29Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Differentially Private Statistical Inference through $\beta$-Divergence One Posterior Sampling [2.8544822698499255]
We propose a posterior sampling scheme from a generalised posterior targeting the minimisation of the $\beta$-divergence between the model and the data generating process.
This provides private estimation that is generally applicable without requiring changes to the underlying model.
We show that $\beta$D-Bayes produces more precise estimation for the same privacy guarantees.
arXiv Detail & Related papers (2023-07-11T12:00:15Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as the optimal Gaussian noise mechanism.
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- A Bias-Accuracy-Privacy Trilemma for Statistical Estimation [16.365507345447803]
We show that no algorithm can simultaneously have low bias, low error, and low privacy loss for arbitrary distributions.
We show that unbiased mean estimation is possible under a more permissive notion of differential privacy.
arXiv Detail & Related papers (2023-01-30T23:40:20Z)
- Differentially private multivariate medians [4.588028371034407]
We develop novel finite-sample performance guarantees for differentially private depth-based medians.
We show that under Cauchy marginals, the cost of heavy-tailed location estimation outweighs the cost of privacy.
arXiv Detail & Related papers (2022-10-12T17:56:04Z)
- Nonparametric extensions of randomized response for private confidence sets [51.75485869914048]
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
We present confidence intervals (CI) and time-uniform confidence sequences (CS) for $\mu^\star$ when only given access to the privatized data.
arXiv Detail & Related papers (2022-02-17T16:04:49Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of the agents' private data.
Existing schemes rely on perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network-wide learning dynamics.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
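The nullspace idea in the entry above can be illustrated with a toy construction (a hypothetical simplification, not the paper's scheme): per-agent noise vectors drawn jointly so they cancel under network averaging, masking individual estimates while leaving the aggregate untouched.

```python
import numpy as np

def zero_sum_noise(num_agents, dim, scale, rng=None):
    """Draw per-agent noise vectors constrained to sum to zero.

    Each agent's estimate is masked by its own noise vector, but the
    vectors lie in the nullspace of the network-averaging operation,
    so the mean of the perturbed estimates equals the true mean.
    """
    rng = rng or np.random.default_rng()
    z = rng.normal(scale=scale, size=(num_agents, dim))
    # Subtracting the across-agent mean projects the noise onto the
    # zero-sum subspace: columns now sum to zero exactly.
    return z - z.mean(axis=0, keepdims=True)
```

The projection makes the noise vectors correlated across agents by design, which is precisely what independent per-agent perturbations cannot achieve.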
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.