Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism can Beat Gaussian in High Dimensions
- URL: http://arxiv.org/abs/2302.03511v2
- Date: Sat, 30 Mar 2024 13:30:11 GMT
- Title: Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism can Beat Gaussian in High Dimensions
- Authors: Gokularam Muthukrishnan, Sheetal Kalyani
- Abstract summary: We study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
We show how the i.n.i.d. noise can improve the performance in the private empirical risk minimization through coordinate descent.
- Score: 9.20186865054847
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conventionally, in a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response. In this work, we formally present the addition of noise that is independent, but not identically distributed (i.n.i.d.), across the coordinates to achieve a tighter privacy-accuracy trade-off by exploiting coordinate-wise disparity. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choices of parameters that ensure these conditions are derived theoretically. Theoretical analyses and numerical simulations demonstrate that the i.n.i.d. mechanisms achieve higher utility for given privacy requirements than their i.i.d. counterparts. One interesting observation is that, contrary to popular belief, the Laplace mechanism outperforms the Gaussian even in high dimensions if the irregularity in coordinate-wise sensitivities is exploited. We also demonstrate how i.n.i.d. noise can improve the performance of private empirical risk minimization through coordinate descent.
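As a rough illustration of the i.n.i.d. idea, the sketch below adds Laplace noise whose scale varies per coordinate. It uses the standard composition bound (a per-coordinate Laplace scale b_i contributes Delta_i / b_i to the total epsilon) and an allocation b_i proportional to Delta_i^(1/3), which minimizes total noise variance under that budget. This is an illustrative parameter choice, not necessarily the optimal allocation derived in the paper.

```python
import numpy as np


def inid_laplace(response, sensitivities, epsilon, rng=None):
    """Sketch of an i.n.i.d. Laplace mechanism.

    Per-coordinate scales b_i are chosen so that sum_i Delta_i / b_i
    equals epsilon, which yields epsilon-DP by basic composition across
    coordinates. The allocation b_i ~ Delta_i**(1/3) minimizes total
    noise variance subject to that constraint (illustrative choice,
    not the paper's derived optimum).
    """
    if rng is None:
        rng = np.random.default_rng()
    delta = np.asarray(sensitivities, dtype=float)
    # b_i = Delta_i^{1/3} * (sum_j Delta_j^{2/3}) / epsilon
    scales = delta ** (1.0 / 3.0) * np.sum(delta ** (2.0 / 3.0)) / epsilon
    return np.asarray(response, dtype=float) + rng.laplace(scale=scales)
```

Note that coordinates with smaller sensitivity receive proportionally less noise, which is the source of the utility gain over the i.i.d. mechanism that scales all coordinates by the total L1 sensitivity.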
Related papers
- Universal Exact Compression of Differentially Private Mechanisms [47.57948804514929]
We introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer.
PPR preserves the joint distribution of the data and the output of the original local randomizer.
Experiment results show that PPR consistently gives a better trade-off between communication, accuracy, central and local differential privacy.
arXiv Detail & Related papers (2024-05-28T23:54:31Z) - On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
arXiv Detail & Related papers (2024-02-09T02:11:25Z) - Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z) - Less is More: Revisiting the Gaussian Mechanism for Differential Privacy [8.89234867625102]
Differential privacy via output perturbation has been a de facto standard for releasing query or computation results on sensitive data.
We identify that all existing Gaussian mechanisms suffer from the curse of full-rank covariance matrices.
arXiv Detail & Related papers (2023-06-04T04:14:38Z) - General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error.
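The classical baseline that the paper above analyzes can be sketched as follows: clip each record, compute the mean, and add Gaussian noise calibrated to the sensitivity. The sigma formula is the standard (epsilon, delta)-DP Gaussian mechanism bound (valid for epsilon < 1), not the refined analysis of that paper.

```python
import numpy as np


def private_mean(data, clip_norm, epsilon, delta, rng=None):
    """Private mean via the classical Gaussian mechanism (sketch).

    Rows are clipped to L2 norm clip_norm, so replacing one row changes
    the mean by at most 2 * clip_norm / n in L2. Adding noise with
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon gives
    (epsilon, delta)-DP via the standard bound for epsilon < 1.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(data, dtype=float)
    # Clip each row to L2 norm at most clip_norm.
    norms = np.maximum(np.linalg.norm(x, axis=1), 1e-12)
    x = x * np.minimum(1.0, clip_norm / norms)[:, None]
    sensitivity = 2.0 * clip_norm / len(x)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x.mean(axis=0) + rng.normal(scale=sigma, size=x.shape[1])
```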
arXiv Detail & Related papers (2023-01-31T18:47:42Z) - Introducing the Huber mechanism for differentially private low-rank matrix completion [9.944551494217075]
We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves epsilon-differential privacy similar to the Laplace mechanism.
arXiv Detail & Related papers (2022-06-16T04:33:06Z) - Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
arXiv Detail & Related papers (2022-06-15T01:43:37Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - The Skellam Mechanism for Differentially Private Federated Learning [28.623090760737075]
We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables.
We analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the Rényi divergence between two shifted Skellam distributions.
While the mechanism is useful in both centralized and distributed privacy applications, we investigate how it can be applied in the context of federated learning.
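The Skellam distribution itself is easy to sample: it is the difference of two independent Poisson variables, giving discrete, symmetric, zero-mean noise with variance 2 * mu. The sketch below shows only the sampling step; the privacy guarantee depends on the Rényi-divergence analysis and parameter choices in that paper.

```python
import numpy as np


def skellam_noise(size, mu, rng=None):
    """Symmetric Skellam noise as the difference of two i.i.d.
    Poisson(mu) samples: discrete, zero-mean, variance 2 * mu."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.poisson(mu, size) - rng.poisson(mu, size)
```

Because the samples are integer-valued, the mechanism composes cleanly with secure aggregation over finite fields, which is the motivation for its use in federated learning.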
arXiv Detail & Related papers (2021-10-11T04:28:11Z) - Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise distributions that fall monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.