Differential Privacy of Dirichlet Posterior Sampling
- URL: http://arxiv.org/abs/2110.01984v1
- Date: Sun, 3 Oct 2021 07:41:19 GMT
- Title: Differential Privacy of Dirichlet Posterior Sampling
- Authors: Donlapark Ponnoprat
- Abstract summary: We study the inherent privacy of releasing a single draw from a Dirichlet posterior distribution.
With the notion of truncated concentrated differential privacy (tCDP), we are able to derive a simple privacy guarantee of the Dirichlet posterior sampling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Besides the Laplace distribution and the Gaussian distribution, there are
many other probability distributions whose privacy-preserving properties under
a random draw are not well understood -- one of which is the Dirichlet
distribution. In this work, we study the inherent privacy of releasing a single
draw from a Dirichlet posterior distribution. As a complement to the previous
study that provides general theories on the differential privacy of posterior
sampling from exponential families, this study focuses specifically on the
Dirichlet posterior sampling and its privacy guarantees. With the notion of
truncated concentrated differential privacy (tCDP), we are able to derive a
simple privacy guarantee of the Dirichlet posterior sampling, which effectively
allows us to analyze its utility in various settings. Specifically, we prove
accuracy guarantees of private Multinomial-Dirichlet sampling, which is
prevalent in Bayesian tasks, and private release of a normalized histogram. In
addition, with our results, it is possible to make Bayesian reinforcement
learning differentially private by modifying the Dirichlet sampling for state
transition probabilities.
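The mechanism studied in the abstract can be sketched concretely. The snippet below, a minimal illustration rather than the paper's exact algorithm, privately releases a normalized histogram by publishing one draw from the Dirichlet posterior Dirichlet(alpha_1 + n_1, ..., alpha_k + n_k), generated by normalizing independent Gamma draws; the function name and the example prior values are illustrative assumptions.

```python
import random

def dirichlet_posterior_draw(counts, alpha):
    # One draw from Dirichlet(alpha_1 + n_1, ..., alpha_k + n_k), generated
    # by normalizing independent Gamma(shape, 1) variables.  Per the paper,
    # the prior vector `alpha` governs the privacy/utility trade-off (tCDP).
    gammas = [random.gammavariate(a + n, 1.0) for a, n in zip(alpha, counts)]
    total = sum(gammas)
    return [g / total for g in gammas]

# Private release of a normalized histogram: publish one posterior draw
# instead of the empirical frequencies themselves.
probs = dirichlet_posterior_draw(counts=[3, 5, 2], alpha=[1.0, 1.0, 1.0])
```

Because the released vector is a single random draw rather than the raw frequencies, the randomness of the posterior itself supplies the privacy protection, with no extra noise mechanism layered on top.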
Related papers
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) introduces an intermediate trusted server (a shuffler) between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
- Bayesian Inference Under Differential Privacy: Prior Selection Considerations with Application to Univariate Gaussian Data and Regression [0.3683202928838613]
We show that analysts can take constraints imposed by the bounds into account when specifying prior distributions.
We provide theoretical and empirical results regarding what classes of default priors produce valid inference for a differentially private release.
arXiv Detail & Related papers (2024-05-22T16:27:20Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- A Bias-Variance-Privacy Trilemma for Statistical Estimation [19.548528664406874]
We prove that no algorithm can simultaneously have low bias, low variance, and low privacy loss for arbitrary distributions.
We show that unbiased mean estimation is possible under approximate differential privacy if we assume that the distribution is symmetric.
arXiv Detail & Related papers (2023-01-30T23:40:20Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
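The amplification-by-shuffling pipeline this entry describes can be sketched as follows. This is a hedged, minimal illustration, assuming the simplest local randomizer (randomized response on a bit); the function names are not from the paper.

```python
import math
import random

def randomized_response(bit, epsilon0):
    # epsilon0-LDP report of a single bit: tell the truth with
    # probability e^eps0 / (e^eps0 + 1), otherwise flip the bit.
    p_truth = math.exp(epsilon0) / (math.exp(epsilon0) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def shuffled_reports(bits, epsilon0):
    # The shuffler strips user identities and order; only the multiset of
    # reports reaches the curator.  This anonymization is what amplifies
    # the central DP guarantee beyond the local epsilon0.
    reports = [randomized_response(b, epsilon0) for b in bits]
    random.shuffle(reports)
    return reports

reports = shuffled_reports([0, 1] * 50, epsilon0=1.0)
```

The curator sees only the shuffled multiset, so its view depends on any single user far more weakly than the local guarantee alone would suggest.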
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
- Successive Refinement of Privacy [38.20887036580742]
This work examines how much randomness is needed to achieve local differential privacy (LDP).
A motivating scenario is providing multiple levels of privacy to multiple analysts, either for distribution or for heavy-hitter estimation.
We show that we cannot reuse random keys over time while preserving privacy of each user.
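The LDP distribution-estimation setting mentioned above can be made concrete with a debiased randomized-response frequency estimator. This is an illustrative sketch, not the paper's scheme; note that each report draws fresh randomness, which matches the point that random keys cannot be reused over time without losing privacy.

```python
import math
import random

def estimate_fraction_of_ones(bits, epsilon):
    # Each report uses fresh randomness; reusing the same random key across
    # rounds would leak information about individual users.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    reports = [b if random.random() < p else 1 - b for b in bits]
    observed = sum(reports) / len(reports)
    # Debias: E[observed] = (1 - p) + f * (2p - 1), so solve for f.
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(7)
# True fraction of ones is 0.3; the estimate recovers it approximately.
est = estimate_fraction_of_ones([1] * 3000 + [0] * 7000, epsilon=1.0)
```

The debiasing step inverts the known flipping probability, trading extra variance for an unbiased estimate of the population frequency.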
arXiv Detail & Related papers (2020-05-24T04:16:01Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
- Propose, Test, Release: Differentially private estimation with high probability [9.25177374431812]
We introduce a new general version of the PTR mechanism that allows us to derive high probability error bounds for differentially private estimators.
Our algorithms provide the first statistical guarantees for differentially private estimation of the median and mean without any boundedness assumptions on the data.
arXiv Detail & Related papers (2020-02-19T01:29:05Z)
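The propose-test-release (PTR) idea in the last entry can be sketched generically. The skeleton below is an assumption-laden illustration, not the paper's algorithm: the caller supplies a distance-to-instability function, and the mechanism privately tests that the dataset is far from any "unstable" neighbor before releasing the statistic with Laplace noise.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def propose_test_release(data, statistic, distance_to_instability,
                         epsilon, delta, noise_scale):
    # Generic PTR skeleton: noisily test the distance to instability; if
    # the test fails, refuse to answer rather than risk a private leak.
    d_hat = distance_to_instability(data) + laplace_noise(1.0 / epsilon)
    if d_hat <= math.log(1.0 / (2.0 * delta)) / epsilon:
        return None  # test failed: output "no answer"
    return statistic(data) + laplace_noise(noise_scale)

random.seed(0)
data = [2.0, 3.0, 3.0, 3.0, 4.0]
# The constant distance function is a stand-in for illustration only.
out = propose_test_release(data, statistic=lambda d: sorted(d)[len(d) // 2],
                           distance_to_instability=lambda d: 1e9,
                           epsilon=1.0, delta=1e-6, noise_scale=1.0)
```

The "refuse to answer" branch is what lets PTR give high-probability error bounds without boundedness assumptions: noise is calibrated to a small threshold only on datasets certified as stable.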
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.