Truncated Laplace and Gaussian mechanisms of RDP
- URL: http://arxiv.org/abs/2309.12647v1
- Date: Fri, 22 Sep 2023 06:37:45 GMT
- Title: Truncated Laplace and Gaussian mechanisms of RDP
- Authors: Jie Fu, Zhiyu Sun, Haitao Liu, Zhili Chen
- Abstract summary: The Laplace mechanism and the Gaussian mechanism are primary mechanisms in differential privacy.
Due to the infinite-range random variables they generate, the Laplace and Gaussian mechanisms may return values that are semantically impossible, such as negative numbers.
- Score: 28.227024132603123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Laplace mechanism and the Gaussian mechanism are primary mechanisms in differential privacy, widely applicable to many scenarios involving numerical data. However, due to the infinite-range random variables they generate, the Laplace and Gaussian mechanisms may return values that are semantically impossible, such as negative numbers. To address this issue, we have designed the truncated Laplace mechanism and Gaussian mechanism. For a given truncation interval [a, b], the truncated Gaussian mechanism ensures the same Renyi Differential Privacy (RDP) as the untruncated mechanism, regardless of the values chosen for the truncation interval [a, b]. Similarly, the truncated Laplace mechanism, for specified interval [a, b], maintains the same RDP as the untruncated mechanism. We provide the RDP expressions for each of them. We believe that our study can further enhance the utility of differential privacy in specific applications.
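A minimal sketch of the idea, assuming the simplest rejection-sampling reading of truncation: noise is added as usual, but the released value is constrained to the truncation interval [a, b], so semantically impossible outputs (e.g., negative counts) can never be returned. The function names and the rejection-sampling implementation below are illustrative assumptions, not the paper's exact construction or its RDP accounting.

```python
import numpy as np

def truncated_gaussian_mechanism(value, sigma, a, b, rng=None):
    """Release `value` plus N(0, sigma^2) noise, resampled until the
    output lands in the truncation interval [a, b] (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    while True:
        out = value + rng.normal(0.0, sigma)
        if a <= out <= b:
            return out

def truncated_laplace_mechanism(value, scale, a, b, rng=None):
    """Same idea with Laplace(0, scale) noise (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    while True:
        out = value + rng.laplace(0.0, scale)
        if a <= out <= b:
            return out

# Example: a count query can never be negative, so truncate the output to [0, 1000].
true_count = 42
print(truncated_gaussian_mechanism(true_count, sigma=2.0, a=0.0, b=1000.0))
print(truncated_laplace_mechanism(true_count, scale=2.0, a=0.0, b=1000.0))
```

Per the abstract, both truncated mechanisms retain the RDP of their untruncated counterparts (for the Gaussian, regardless of how [a, b] is chosen), so the interval can be picked purely to match the valid output domain.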
Related papers
- Universal Exact Compression of Differentially Private Mechanisms [47.57948804514929]
We introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer.
PPR preserves the joint distribution of the data and the output of the original local randomizer.
Experimental results show that PPR consistently gives a better trade-off between communication, accuracy, and central and local differential privacy.
arXiv Detail & Related papers (2024-05-28T23:54:31Z) - Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z) - Communication-Efficient Laplace Mechanism for Differential Privacy via Random Quantization [24.98836880652668]
We propose the first method that realizes the Laplace mechanism exactly (i.e., Laplace noise is added to the data).
Our mechanism can serve as a drop-in replacement for local or centralized differential privacy applications.
arXiv Detail & Related papers (2023-09-13T14:17:54Z) - Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy [78.32404878825845]
Reconstruction Robustness (ReRo) was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
Previous research has demonstrated that differential privacy (DP) mechanisms also provide ReRo, but so far, only Monte Carlo estimates of a tight ReRo bound have been shown.
arXiv Detail & Related papers (2023-07-08T08:02:47Z) - Less is More: Revisiting the Gaussian Mechanism for Differential Privacy [8.89234867625102]
Differential privacy via output perturbation has been a de facto standard for releasing query or computation results on sensitive data.
We identify that all existing Gaussian mechanisms suffer from the curse of full-rank covariance matrices.
arXiv Detail & Related papers (2023-06-04T04:14:38Z) - General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as this classical Gaussian-noise approach.
arXiv Detail & Related papers (2023-01-31T18:47:42Z) - Faster Privacy Accounting via Evolving Discretization [54.32252900997422]
We introduce a new algorithm for numerical composition of privacy random variables.
Our algorithm achieves a running time and memory usage of $\mathrm{polylog}(k)$ for the task of self-composing a mechanism $k$ times.
arXiv Detail & Related papers (2022-07-10T04:25:37Z) - Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the Large-Composition Regime [12.420941209631742]
We study the design of optimal differential privacy mechanisms in the limit of a large number of compositions.
In this regime the best privacy mechanism is the one that minimizes the Kullback-Leibler divergence.
We show that our quantization approach can be arbitrarily close to an optimal mechanism.
arXiv Detail & Related papers (2022-06-25T20:05:50Z) - Shuffle Gaussian Mechanism for Differential Privacy [2.7564955518050693]
We study the mechanism's Rényi differential privacy (RDP), showing that it is of the form: $$\epsilon(\lambda) \leq \frac{1}{\lambda-1}\log\left(\frac{e^{-\lambda/2\sigma^2}}{n^{\lambda}}\sum_{\substack{k_1+\dotsb+k_n=\lambda\\ k_1,\dotsc,k_n\geq 0}}\frac{\lambda!}{k_1!\cdots k_n!}\,e^{\sum_{i=1}^{n}k_i^{2}/2\sigma^{2}}\right)$$
arXiv Detail & Related papers (2022-06-20T04:54:16Z) - Introducing the Huber mechanism for differentially private low-rank matrix completion [9.944551494217075]
We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves $\epsilon$-differential privacy similar to the Laplace mechanism.
arXiv Detail & Related papers (2022-06-16T04:33:06Z) - A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.