Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
- URL: http://arxiv.org/abs/2302.03511v3
- Date: Mon, 14 Oct 2024 15:59:02 GMT
- Title: Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
- Authors: Gokularam Muthukrishnan, Sheetal Kalyani
- Abstract summary: We study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
We show how the i.n.i.d. noise can improve the performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
- Score: 9.20186865054847
- License:
- Abstract: Conventionally, in a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response. In this work, we formally present the addition of noise that is independent but not identically distributed (i.n.i.d.) across the coordinates to achieve a tighter privacy-accuracy trade-off by exploiting coordinate-wise disparity in privacy leakage. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choice of parameters that ensures these conditions is derived theoretically. Theoretical analyses and numerical simulations demonstrate that the i.n.i.d. mechanisms achieve higher utility for the given privacy requirements compared to their i.i.d. counterparts. One of the interesting observations is that the Laplace mechanism outperforms the Gaussian even in high dimensions, contrary to popular belief, if the irregularity in coordinate-wise sensitivities is exploited. We also demonstrate how the i.n.i.d. noise can improve the performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
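As a concrete illustration of the i.n.i.d. idea, the sketch below adds per-coordinate Laplace noise whose scales b_i are allocated by minimizing total noise variance subject to the standard epsilon-DP Laplace budget sum_i Delta_i / b_i <= epsilon, which gives b_i proportional to Delta_i^(1/3). The function names and this variance-minimizing allocation are our illustration of the general principle, not necessarily the paper's exact construction:

```python
import numpy as np

def inid_laplace(response, sensitivities, epsilon, rng=None):
    """Add independent, non-identically distributed (i.n.i.d.) Laplace noise.

    Per-coordinate scales b_i minimize the total noise variance
    2 * sum_i b_i**2 subject to the epsilon-DP constraint
    sum_i Delta_i / b_i <= epsilon; Lagrange multipliers give
    b_i = Delta_i**(1/3) * sum_j Delta_j**(2/3) / epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = np.asarray(sensitivities, dtype=float)
    b = np.cbrt(delta) * np.sum(delta ** (2.0 / 3.0)) / epsilon
    return np.asarray(response, dtype=float) + rng.laplace(scale=b)

def iid_laplace(response, sensitivities, epsilon, rng=None):
    """i.i.d. baseline: a single scale from the total l1 sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    b = np.sum(sensitivities) / epsilon
    return np.asarray(response, dtype=float) + rng.laplace(scale=b, size=len(response))
```

When the coordinate-wise sensitivities are highly disparate (e.g. [1.0, 0.01, 0.01]), the i.n.i.d. allocation spends the privacy budget where it matters and its total noise variance 2 * (sum_j Delta_j^(2/3))^3 / epsilon^2 is well below the i.i.d. value 2 * d * (sum_j Delta_j)^2 / epsilon^2, matching the trade-off the abstract describes.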
Related papers
- Universal Exact Compression of Differentially Private Mechanisms [47.57948804514929]
We introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer.
PPR preserves the joint distribution of the data and the output of the original local randomizer.
Experiment results show that PPR consistently gives a better trade-off between communication, accuracy, central and local differential privacy.
arXiv Detail & Related papers (2024-05-28T23:54:31Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
arXiv Detail & Related papers (2024-02-09T02:11:25Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as a mechanism that adds Gaussian noise.
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Introducing the Huber mechanism for differentially private low-rank matrix completion [9.944551494217075]
We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves epsilon-differential privacy similar to the Laplace mechanism.
arXiv Detail & Related papers (2022-06-16T04:33:06Z)
- The Skellam Mechanism for Differentially Private Federated Learning [28.623090760737075]
We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables.
We analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the Rényi divergence between two shifted Skellam distributions.
While the mechanism is useful in both centralized and distributed privacy applications, we investigate how it can be applied in the context of federated learning.
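The difference-of-Poissons construction summarized above can be sampled directly. This is a minimal sketch of the noise distribution only, not the paper's full federated protocol; the function name and `mu` parameter are illustrative:

```python
import numpy as np

def skellam_noise(mu, size, rng=None):
    """Sample symmetric Skellam noise as the difference of two independent
    Poisson(mu) draws. The result is integer-valued (mean 0, variance 2*mu),
    which makes it compatible with discrete/secure-aggregation pipelines."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(mu, size=size) - rng.poisson(mu, size=size)
```

Because each draw stays an integer, Skellam noise composes cleanly with modular secure aggregation, which is the motivation for preferring it over continuous Gaussian noise in the federated setting.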
arXiv Detail & Related papers (2021-10-11T04:28:11Z)
- A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
- Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider noise that is symmetric and falls monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates in decentralized learning allows inference of the underlying private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
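The nullspace idea behind graph-homomorphic perturbations can be illustrated with a toy zero-sum construction. This is our simplified sketch of the principle (cancellation in the network average), not the paper's actual scheme; the function name and edge-based sampling are assumptions:

```python
import numpy as np

def edge_cancelling_perturbations(edges, n_agents, dim, scale=1.0, rng=None):
    """Toy zero-sum perturbations on a communication graph.

    For each edge (i, j), draw one noise vector, add it to agent i and
    subtract it from agent j. The perturbations then lie in the nullspace
    of global averaging: they sum to zero across agents, so they vanish in
    the network-wide average while still masking each agent's own update.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.zeros((n_agents, dim))
    for i, j in edges:
        v = rng.normal(scale=scale, size=dim)
        p[i] += v
        p[j] -= v
    return p
```

The design choice is the point: because the injected noise cancels exactly in aggregation, it is "invisible" to the learning dynamics, avoiding the performance loss that independently chosen perturbations incur.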
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.