Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
- URL: http://arxiv.org/abs/2302.03511v4
- Date: Sat, 25 Jan 2025 07:42:41 GMT
- Title: Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
- Authors: Gokularam Muthukrishnan, Sheetal Kalyani
- Abstract summary: In a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response.
We study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
- Score: 9.20186865054847
- License:
- Abstract: Conventionally, in a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response. In this work, we formally present the addition of noise that is independent but not identically distributed (i.n.i.d.) across the coordinates, achieving a tighter privacy-accuracy trade-off by exploiting coordinate-wise disparity in privacy leakage. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choices of parameters ensuring these conditions are derived, considering (weighted) mean squared and $\ell_{p}^{p}$-errors as measures of accuracy. Theoretical analyses and numerical simulations demonstrate that the i.n.i.d. mechanisms achieve higher utility for given privacy requirements than their i.i.d. counterparts. One interesting observation is that, contrary to popular belief, the Laplace mechanism outperforms the Gaussian even in high dimensions when the irregularity in coordinate-wise sensitivities is exploited. We also demonstrate how i.n.i.d. noise can improve the performance of private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
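As a concrete illustration of the i.n.i.d. idea, the numpy sketch below adds per-coordinate Laplace noise calibrated to per-coordinate $\ell_1$ sensitivities. The cube-root scale allocation shown here follows from minimising the MSE subject to the standard Laplace density-ratio condition $\sum_i \Delta_i / b_i \le \varepsilon$; the function name is illustrative, and the paper derives the exact conditions and optimal parameters.

```python
import numpy as np

def inid_laplace(response, sens, eps, rng=None):
    """Release `response` with i.n.i.d. Laplace noise.

    `sens[i]` is the l1 sensitivity of coordinate i. Pure eps-DP holds
    whenever sum_i sens[i] / b[i] <= eps (Laplace density-ratio bound);
    minimising the MSE sum_i 2*b[i]**2 under that constraint gives the
    cube-root allocation b[i] = sens[i]**(1/3) * sum_j sens[j]**(2/3) / eps.
    """
    rng = np.random.default_rng() if rng is None else rng
    sens = np.asarray(sens, dtype=float)
    scales = sens ** (1.0 / 3.0) * (sens ** (2.0 / 3.0)).sum() / eps
    return np.asarray(response, dtype=float) + rng.laplace(scale=scales)

# Expected MSE vs. the i.i.d. Laplace mechanism (common scale ||sens||_1 / eps):
sens, eps = np.array([0.1, 0.5, 2.0]), 1.0
b = sens ** (1.0 / 3.0) * (sens ** (2.0 / 3.0)).sum() / eps
mse_inid = 2.0 * (b ** 2).sum()
mse_iid = 2.0 * sens.size * (sens.sum() / eps) ** 2
assert mse_inid <= mse_iid  # i.n.i.d. never does worse; strictly better under disparity
```

When the sensitivities are equal, the allocation collapses to the usual i.i.d. scale; the gain appears only when coordinate-wise sensitivities differ, which is the regime the paper targets.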
Related papers
- Differentially Private Random Block Coordinate Descent [51.62669821275571]
We propose a differentially private random coordinate descent method that selects multiple coordinates with varying probabilities in each iteration using sketch matrices.
Our algorithm generalizes both DP-CD and the classical DP-SGD (Differentially Private Stochastic Gradient Descent), while preserving the same utility guarantees.
arXiv Detail & Related papers (2024-12-22T15:06:56Z)
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Universal Exact Compression of Differentially Private Mechanisms [47.57948804514929]
We introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer.
PPR preserves the joint distribution of the data and the output of the original local randomizer.
Experiment results show that PPR consistently gives a better trade-off between communication, accuracy, central and local differential privacy.
arXiv Detail & Related papers (2024-05-28T23:54:31Z)
- On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
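The mechanism analysed above can be sketched in a few lines: add i.i.d. Gaussian noise to the candidate scores and release only the argmax index. The function name is illustrative, and this sketch omits the paper's contribution, the pure ex-ante/ex-post DP accounting for the released index.

```python
import numpy as np

def report_noisy_max(scores, sigma, rng=None):
    """Report Noisy Max with Gaussian noise: add i.i.d. N(0, sigma^2)
    noise to each candidate's score and release only the argmax index."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(scores, dtype=float) + rng.normal(0.0, sigma, size=len(scores))
    return int(np.argmax(noisy))
```

In practice `sigma` would be calibrated to the per-query sensitivity and the target privacy level; only the winning index, never the noisy scores, is released.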
arXiv Detail & Related papers (2024-02-09T02:11:25Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error.
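The classical recipe described above (true mean plus Gaussian noise) can be sketched as follows. The function name, the bounded-$\ell_2$-ball assumption on the records, and the $(\varepsilon, \delta)$ calibration are illustrative assumptions, independent of the paper's correlated-noise lower bound.

```python
import numpy as np

def private_mean(x, radius, eps, delta, rng=None):
    """Classical Gaussian-mechanism mean estimate.

    Assumes every row of `x` lies in an l2 ball of the given radius
    (records outside it would break the bound), so the replacement
    sensitivity of the mean is 2 * radius / n. The noise std uses the
    classical analytic bound sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    sigma = (2.0 * radius / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return x.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

The estimator is unbiased because the noise has zero mean; the paper's result says that, for concentrated DP, no unbiased estimator can introduce substantially less error than such Gaussian-noise constructions.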
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Introducing the Huber mechanism for differentially private low-rank matrix completion [9.944551494217075]
We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves epsilon-differential privacy similar to the Laplace mechanism.
arXiv Detail & Related papers (2022-06-16T04:33:06Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
- Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider noise densities that are symmetric and fall monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z)
- A Central Limit Theorem for Differentially Private Query Answering [23.015107368002884]
We show that the product of the privacy parameter and the $\ell_2$-loss of the mechanism is lower bounded by the dimension.
Our findings are corroborated by numerical experiments.
arXiv Detail & Related papers (2021-03-15T21:06:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.