Fisher information under local differential privacy
- URL: http://arxiv.org/abs/2005.10783v1
- Date: Thu, 21 May 2020 17:05:09 GMT
- Title: Fisher information under local differential privacy
- Authors: Leighton Pate Barnes, Wei-Ning Chen, and Ayfer Ozgur
- Abstract summary: We show how Fisher information from statistical samples can scale with the privacy parameter $\varepsilon$ under local differential privacy constraints.
We further apply these inequalities to sparse Bernoulli models and demonstrate privacy mechanisms and estimators with order-matching squared $\ell^2$ error.
- Score: 5.582101184758529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop data processing inequalities that describe how Fisher information
from statistical samples can scale with the privacy parameter $\varepsilon$
under local differential privacy constraints. These bounds are valid under
general conditions on the distribution of the score of the statistical model,
and they elucidate under which conditions the dependence on $\varepsilon$ is
linear, quadratic, or exponential. We show how these inequalities imply order
optimal lower bounds for private estimation for both the Gaussian location
model and discrete distribution estimation for all levels of privacy
$\varepsilon>0$. We further apply these inequalities to sparse Bernoulli models
and demonstrate privacy mechanisms and estimators with order-matching squared
$\ell^2$ error.
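As a concrete illustration of the quantity the paper bounds, consider binary randomized response, a standard $\varepsilon$-LDP mechanism (not necessarily the one used in the paper): the Fisher information that a single privatized Bernoulli sample carries about $p$ has a closed form, and for small $\varepsilon$ it scales quadratically, matching one of the regimes described above. A minimal Python sketch:

```python
import numpy as np

def rr_privatize(x, eps, rng):
    # Binary randomized response: keep the bit w.p. e^eps/(1+e^eps).
    # This is eps-LDP; a standard baseline, not necessarily the paper's mechanism.
    keep = np.exp(eps) / (1 + np.exp(eps))
    return x if rng.random() < keep else 1 - x

def fisher_info_privatized(p, eps):
    # Output Y ~ Bernoulli(q) with q = pi*p + (1-pi)*(1-p), pi = e^eps/(1+e^eps).
    # I_Y(p) = (dq/dp)^2 / (q*(1-q)) = (2*pi-1)^2 / (q*(1-q)).
    # For small eps, 2*pi-1 ~ eps/2, so the information scales like eps^2.
    pi = np.exp(eps) / (1 + np.exp(eps))
    q = pi * p + (1 - pi) * (1 - p)
    return (2 * pi - 1) ** 2 / (q * (1 - q))

p = 0.3
for eps in [0.1, 1.0, 5.0]:
    # Compare with the non-private Fisher information 1/(p*(1-p)).
    print(eps, fisher_info_privatized(p, eps), 1 / (p * (1 - p)))
```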
Related papers
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
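A minimal sketch of one plausible bounded-support variant (parameter names are illustrative; the paper defines the exact mechanisms and their data-dependent accounting):

```python
import numpy as np

def bounded_gaussian_mechanism(value, sigma, lo, hi, rng):
    # Add Gaussian noise, then project the output onto [lo, hi].
    # A hedged sketch of a bounded-support modification; see the paper
    # for the precise truncated/projected variants and their guarantees.
    noisy = value + rng.normal(0.0, sigma)
    return min(max(noisy, lo), hi)

rng = np.random.default_rng(0)
print(bounded_gaussian_mechanism(0.7, sigma=1.0, lo=0.0, hi=1.0, rng=rng))
```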
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Approximation of Pufferfish Privacy for Gaussian Priors [6.2584995033090625]
We show that $(\epsilon, \delta)$-pufferfish privacy is attained if the additive Laplace noise is calibrated to the differences in mean and variance of the Gaussian distributions conditioned on every discriminative secret pair.
A typical application is the private release of the summation (or average) query.
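For the summation query, the generic Laplace mechanism looks as follows; the paper's contribution is the pufferfish-specific calibration of the noise scale, which this sketch does not reproduce:

```python
import numpy as np

def private_sum(xs, eps, clip, rng):
    # Clip each value to [0, clip] so the L1 sensitivity of the sum is clip,
    # then add Laplace noise of scale clip/eps (standard eps-DP Laplace mechanism).
    total = np.clip(xs, 0.0, clip).sum()
    return total + rng.laplace(0.0, clip / eps)

rng = np.random.default_rng(0)
print(private_sum(np.array([0.2, 0.9, 0.5]), eps=1.0, clip=1.0, rng=rng))
```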
arXiv Detail & Related papers (2024-01-22T22:43:38Z)
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
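A minimal rejection-ABC sketch of the idea, under assumed details (Bernoulli model, Laplace-privatized mean, uniform prior); the paper's methods are more general:

```python
import numpy as np

def abc_posterior_sample(observed_private, n, eps, n_draws, tol, rng):
    # Propose a parameter from the prior, simulate confidential data,
    # push it through the same privacy mechanism as the observed release,
    # and accept the proposal if the privatized statistics are close.
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)               # prior draw
        data = rng.binomial(1, theta, size=n)       # confidential data
        private_stat = data.mean() + rng.laplace(0.0, 1.0 / (n * eps))
        if abs(private_stat - observed_private) < tol:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
print(abc_posterior_sample(0.4, n=200, eps=1.0, n_draws=5000, tol=0.02, rng=rng).mean())
```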
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as this Gaussian noise approach.
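A sketch of the classical approach with the standard $(\epsilon, \delta)$ Gaussian calibration (the paper works with concentrated DP and more general correlated noise; the constants below are the textbook ones, valid for $\epsilon \le 1$):

```python
import numpy as np

def private_mean_gaussian(xs, bound, eps, delta, rng):
    # Values are assumed to lie in [-bound, bound] (no clipping, so the
    # estimator stays unbiased). L2 sensitivity of the mean is 2*bound/n.
    # Standard calibration: sigma = sensitivity * sqrt(2*ln(1.25/delta)) / eps.
    n = len(xs)
    sigma = (2.0 * bound / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return np.mean(xs) + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=1000)
print(private_mean_gaussian(xs, bound=1.0, eps=1.0, delta=1e-5, rng=rng))
```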
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Differentially Private Sampling from Distributions [1.452875650827562]
In some regimes, private sampling requires fewer observations than learning a description of $P$ nonprivately.
For some classes of distributions, the overhead in the number of observations needed for private learning compared to non-private learning is completely captured by the number of observations needed for private sampling.
arXiv Detail & Related papers (2022-11-15T14:56:42Z)
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
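For context, one DP-SGD step with per-example clipping is sketched below; the paper's contribution is the per-example accountant on top of this mechanism, which the sketch does not implement:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    # Clip each example's gradient to clip_norm in L2, sum, add Gaussian
    # noise scaled to clip_norm, average, and take a gradient step.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy = (np.sum(clipped, axis=0)
             + rng.normal(0.0, noise_mult * clip_norm, size=params.shape))
    return params - lr * noisy / len(per_example_grads)

rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]
print(dp_sgd_step(params, grads, clip_norm=1.0, noise_mult=1.0, lr=0.1, rng=rng))
```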
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Tight Differential Privacy Guarantees for the Shuffle Model with $k$-Randomized Response [6.260747047974035]
Most differentially private (DP) algorithms assume either a central model, where a trusted third party adds noise to queries made on datasets, or a local model, where users perturb their data locally.
The recently proposed shuffle model is an intermediate framework between the central and the local paradigms.
We perform experiments on both synthetic and real data to compare the privacy-utility trade-off of the shuffle model with that of the central model.
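The local randomizer in question, $k$-randomized response, is standard; a minimal sketch (each report is $\epsilon$-LDP on its own, and the paper analyzes the tighter central guarantee after shuffling):

```python
import numpy as np

def k_randomized_response(x, k, eps, rng):
    # Keep the true symbol with probability e^eps / (e^eps + k - 1),
    # otherwise report one of the other k-1 symbols uniformly at random.
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return x
    other = int(rng.integers(k - 1))   # uniform over the k-1 other symbols
    return other if other < x else other + 1

rng = np.random.default_rng(0)
print([k_randomized_response(2, k=5, eps=1.0, rng=rng) for _ in range(10)])
```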
arXiv Detail & Related papers (2022-05-18T10:44:28Z)
- New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma [10.176795938619417]
We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon, \delta)$-differential privacy.
We show that estimating the covariance matrix in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors.
arXiv Detail & Related papers (2022-05-17T17:55:10Z)
- Nonparametric extensions of randomized response for private confidence sets [51.75485869914048]
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
We present confidence intervals (CI) and time-uniform confidence sequences (CS) for the population mean $\mu^\star$ when only given access to the privatized data.
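A hedged sketch of the setting (not the paper's construction, which is nonparametric and nonasymptotic): users with values in $[0, 1]$ release Laplace-privatized values, and a normal-approximation CI is formed from the releases:

```python
import numpy as np

def ldp_mean_ci(xs, eps, rng, z=1.96):
    # Each user holds x in [0, 1] and releases x + Lap(1/eps), which is
    # eps-LDP (sensitivity 1). The sample mean of the releases is unbiased
    # for the population mean; the CI below is only asymptotic (CLT),
    # unlike the nonasymptotic guarantees in the paper.
    private = xs + rng.laplace(0.0, 1.0 / eps, size=len(xs))
    m = private.mean()
    half = z * private.std(ddof=1) / np.sqrt(len(xs))
    return m - half, m + half

rng = np.random.default_rng(0)
xs = rng.uniform(size=10_000)
print(ldp_mean_ci(xs, eps=1.0, rng=rng))
```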
arXiv Detail & Related papers (2022-02-17T16:04:49Z)
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
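The shuffler itself is just a uniformly random permutation of the locally randomized reports; the paper's contribution is quantifying the resulting central guarantee, which this sketch does not compute:

```python
import numpy as np

def shuffle_reports(reports, rng):
    # Apply a uniformly random permutation before release, severing the
    # link between users and reports; the amplified (eps, delta) guarantee
    # comes from the paper's analysis, not from this code.
    reports = np.asarray(reports)
    return reports[rng.permutation(len(reports))]

rng = np.random.default_rng(0)
print(shuffle_reports([1, 0, 1, 1, 0, 0], rng))
```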
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.