Differential Privacy of Dirichlet Posterior Sampling
- URL: http://arxiv.org/abs/2110.01984v1
- Date: Sun, 3 Oct 2021 07:41:19 GMT
- Title: Differential Privacy of Dirichlet Posterior Sampling
- Authors: Donlapark Ponnoprat
- Abstract summary: We study the inherent privacy of releasing a single draw from a Dirichlet posterior distribution.
With the notion of truncated concentrated differential privacy (tCDP), we are able to derive a simple privacy guarantee of the Dirichlet posterior sampling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Besides the Laplace distribution and the Gaussian distribution,
there are many probability distributions whose privacy-preserving properties
under a random draw are not well understood -- one of which is the Dirichlet
distribution. In this work, we study the inherent privacy of releasing a single
draw from a Dirichlet posterior distribution. As a complement to the previous
study that provides general theories on the differential privacy of posterior
sampling from exponential families, this study focuses specifically on the
Dirichlet posterior sampling and its privacy guarantees. With the notion of
truncated concentrated differential privacy (tCDP), we are able to derive a
simple privacy guarantee of the Dirichlet posterior sampling, which effectively
allows us to analyze its utility in various settings. Specifically, we prove
accuracy guarantees of private Multinomial-Dirichlet sampling, which is
prevalent in Bayesian tasks, and private release of a normalized histogram. In
addition, with our results, it is possible to make Bayesian reinforcement
learning differentially private by modifying the Dirichlet sampling for state
transition probabilities.
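The mechanism the abstract describes can be sketched in a few lines: form the Dirichlet posterior from observed category counts and a symmetric prior, then release a single posterior draw as the private normalized histogram. The function name, the symmetric prior parameter `alpha`, and the fixed seed are illustrative choices, not taken from the paper; the paper's tCDP analysis ties the privacy guarantee to the posterior parameters, which this sketch does not compute.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility only

def dirichlet_posterior_sample(counts, alpha=1.0, rng=rng):
    """Release one draw from the Dirichlet posterior over category
    probabilities.

    counts: observed histogram of category counts.
    alpha:  symmetric prior concentration; with counts n the posterior
            is Dirichlet(n + alpha) componentwise.
    """
    counts = np.asarray(counts, dtype=float)
    posterior_params = counts + alpha  # conjugate Multinomial-Dirichlet update
    return rng.dirichlet(posterior_params)

counts = [40, 25, 20, 15]  # raw histogram of 100 records
private_hist = dirichlet_posterior_sample(counts, alpha=2.0)
print(private_hist)  # a noisy normalized histogram summing to 1
```

A single draw both perturbs and normalizes the histogram in one step, which is why the same primitive can drop into Bayesian pipelines (e.g. sampling state-transition probabilities in Bayesian reinforcement learning) without extra noise-addition machinery.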
Related papers
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) introduces an intermediate trusted shuffler between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Approximation of Pufferfish Privacy for Gaussian Priors [6.2584995033090625]
We show that $(\epsilon, \delta)$-pufferfish privacy is attained if the additive Laplace noise is calibrated to the differences in mean and variance of the Gaussian distributions conditioned on every discriminative secret pair.
A typical application is the private release of the summation (or average) query.
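For comparison, the standard (non-pufferfish) way to privatize a summation query is the Laplace mechanism calibrated to global sensitivity; the pufferfish result above instead calibrates the noise to conditional Gaussian means and variances. The sketch below shows only the classical baseline; the function name, clipping bound, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed for reproducibility only

def private_sum(values, epsilon, clip=1.0, rng=rng):
    """Classical epsilon-DP Laplace mechanism for a sum query.

    Each record is clipped to [0, clip] so the query's global
    sensitivity is `clip`; Laplace(clip / epsilon) noise then
    yields an epsilon-DP release of the sum.
    """
    clipped = np.clip(np.asarray(values, dtype=float), 0.0, clip)
    noise = rng.laplace(loc=0.0, scale=clip / epsilon)
    return clipped.sum() + noise

print(private_sum(np.ones(100), epsilon=0.5))  # noisy count near 100
```

An average query is privatized the same way by dividing the noisy sum by the (public) record count.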
arXiv Detail & Related papers (2024-01-22T22:43:38Z)
- Posterior-Variance-Based Error Quantification for Inverse Problems in Imaging [8.510101522152231]
The proposed method employs estimates of the posterior variance together with techniques from conformal prediction.
The coverage guarantees can also be obtained in case only approximate sampling from the posterior is possible.
Experiments with multiple regularization approaches presented in the paper confirm that in practice, the obtained error bounds are rather tight.
arXiv Detail & Related papers (2022-12-23T17:45:38Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
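The shuffle-amplification setup can be sketched end to end: each user randomizes a private bit with local randomized response, a trusted shuffler permutes the reports so only the multiset survives, and the analyst debiases the mean. The function names, the binary-data setting, and the fixed seed are illustrative assumptions; the amplification bounds themselves come from the papers' analyses, not from this code.

```python
import numpy as np

rng = np.random.default_rng(2)  # fixed seed for reproducibility only

def randomized_response(bit, eps_local, rng=rng):
    """eps_local-LDP randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it."""
    p_truth = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffle_and_release(bits, eps_local, rng=rng):
    """Each user randomizes locally; the shuffler then permutes the
    reports, discarding the user-to-report mapping. The released
    multiset enjoys a much stronger central DP guarantee than
    eps_local alone."""
    reports = np.array([randomized_response(int(b), eps_local) for b in bits])
    rng.shuffle(reports)  # anonymize: only the multiset of reports survives
    return reports

def debias_mean(reports, eps_local):
    """Unbiased estimate of the true mean of the input bits.
    E[report] = (2p - 1) * mean + (1 - p), so invert that affine map."""
    p = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
    return (reports.mean() - (1 - p)) / (2 * p - 1)
```

Shuffling adds no extra noise; the amplification comes purely from destroying the link between each user and their report.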
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
- Successive Refinement of Privacy [38.20887036580742]
This work examines how much randomness is needed to achieve local differential privacy (LDP).
A motivating scenario is providing multiple levels of privacy to multiple analysts, either for distribution or for heavy-hitter estimation.
We show that we cannot reuse random keys over time while preserving privacy of each user.
arXiv Detail & Related papers (2020-05-24T04:16:01Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
- Propose, Test, Release: Differentially private estimation with high probability [9.25177374431812]
We introduce a new general version of the PTR mechanism that allows us to derive high probability error bounds for differentially private estimators.
Our algorithms provide the first statistical guarantees for differentially private estimation of the median and mean without any boundedness assumptions on the data.
arXiv Detail & Related papers (2020-02-19T01:29:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.