Communication-Efficient Laplace Mechanism for Differential Privacy via Random Quantization
- URL: http://arxiv.org/abs/2309.06982v1
- Date: Wed, 13 Sep 2023 14:17:54 GMT
- Title: Communication-Efficient Laplace Mechanism for Differential Privacy via Random Quantization
- Authors: Ali Moradi Shahmiri, Chih Wei Ling, Cheuk Ting Li
- Abstract summary: We propose the first method that realizes the Laplace mechanism exactly (i.e., Laplace noise is added to the data) while requiring only a finite amount of communication.
Our mechanism can serve as a drop-in replacement for local or centralized differential privacy applications.
- Score: 24.98836880652668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the first method that realizes the Laplace mechanism exactly (i.e., Laplace noise is added to the data) while requiring only a finite amount of communication (the original Laplace mechanism requires transmitting a real number) and guaranteeing privacy against the server and database. Our mechanism can serve as a drop-in replacement for local or centralized differential privacy applications where the Laplace mechanism is used. It is constructed using a random quantization technique. Unlike the simple and prevalent Laplace-mechanism-then-quantize approach, the quantization in our mechanism does not result in any distortion or degradation of utility. Unlike existing dithered quantization and channel simulation schemes for simulating additive Laplacian noise, our mechanism guarantees privacy not only against the database and downstream parties, but also against an honest-but-curious server that attempts to decode the data using the dither signals.
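For context, a minimal sketch of the standard Laplace mechanism and of the naive Laplace-mechanism-then-quantize baseline that the abstract contrasts against. The `sensitivity`, `epsilon`, and grid-step values are illustrative assumptions; the paper's random-quantization construction itself is not reproduced here.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Standard Laplace mechanism: add Lap(sensitivity/epsilon) noise.

    The noisy output is a real number, so transmitting it exactly
    requires infinite communication -- the problem the paper addresses.
    """
    rng = np.random.default_rng(rng)
    return value + rng.laplace(scale=sensitivity / epsilon)

def laplace_then_quantize(value, sensitivity, epsilon, step=0.1, rng=None):
    """Naive baseline: add Laplace noise, then round to a finite grid.

    Rounding makes the message finite but distorts the output, so the
    result is no longer exactly Laplace-distributed -- the utility
    degradation the paper's random-quantization scheme avoids.
    """
    noisy = laplace_mechanism(value, sensitivity, epsilon, rng)
    return step * np.round(noisy / step)
```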
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
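As background for the amplification primitive, a sketch of the classic Poisson-subsampling bound for a pure epsilon-DP base mechanism; the paper's mechanism-specific and group-privacy guarantees are tighter and are not reproduced here.

```python
import math

def amplified_epsilon(epsilon, q):
    """Classic amplification-by-subsampling bound.

    Running an epsilon-DP mechanism on a Poisson subsample that includes
    each record independently with probability q yields
    log(1 + q * (e^epsilon - 1))-DP.
    """
    return math.log(1.0 + q * math.expm1(epsilon))

# Example: a 1.0-DP mechanism on a 1% subsample is roughly 0.017-DP.
print(amplified_epsilon(1.0, 0.01))
```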
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
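A minimal sketch of drawing symmetric alpha-stable noise with SciPy (beta = 0 gives the symmetric family; alpha = 2 recovers the Gaussian and alpha = 1 the Cauchy). The calibration of `alpha` and `scale` to a privacy budget follows the paper's analysis and is not shown; the values below are placeholders.

```python
from scipy.stats import levy_stable

def sas_noise(alpha, scale, size, random_state=None):
    """Sample symmetric alpha-stable (SaS) noise: beta=0 makes it symmetric."""
    return levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                           size=size, random_state=random_state)

# Perturb a query answer with SaS noise (alpha and scale are placeholders).
noisy_query = 42.0 + sas_noise(alpha=1.5, scale=1.0, size=1)[0]
```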
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Truncated Laplace and Gaussian mechanisms of RDP [28.227024132603123]
The Laplace mechanism and the Gaussian mechanism are primary mechanisms in differential privacy.
Due to the infinite-range random variables they generate, the Laplace and Gaussian mechanisms may return values that are semantically impossible, such as negative numbers.
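To illustrate the motivation, a toy truncated-Laplace sampler that resamples until the output lands in a semantically valid range (e.g., a count cannot be negative). Note that truncation changes the privacy accounting relative to the plain Laplace mechanism, which is exactly what the paper analyzes under RDP.

```python
import numpy as np

def truncated_laplace(value, scale, low, high, rng=None):
    """Add Laplace noise, resampling until the output falls in [low, high].

    This keeps outputs semantically valid (e.g., non-negative counts), but
    the truncation alters the privacy guarantee -- the effect the paper
    quantifies under Renyi differential privacy.
    """
    rng = np.random.default_rng(rng)
    while True:
        out = value + rng.laplace(scale=scale)
        if low <= out <= high:
            return out
```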
arXiv Detail & Related papers (2023-09-22T06:37:45Z)
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms release a sequence of increasingly accurate answers to a query.
Analysts pay only the privacy cost of the least noisy (most accurate) answer they release.
However, how such ex-post private mechanisms compose has not yet been studied.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
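As a baseline for what a privacy filter does, a sketch of the simplest filter for pure epsilon-DP: track the cumulative budget and halt once it would be exceeded. The paper's filters extend this idea to ex-post private, accuracy-first mechanisms whose cost is only known after release.

```python
class BasicPrivacyFilter:
    """Simplest privacy filter for pure epsilon-DP composition.

    Approves a query only if the cumulative epsilon stays within the
    global budget; the paper's filters additionally handle mechanisms
    whose privacy cost is only known ex post.
    """

    def __init__(self, budget):
        self.budget = budget
        self.spent = 0.0

    def approve(self, epsilon):
        if self.spent + epsilon > self.budget:
            return False  # halt: budget would be exceeded
        self.spent += epsilon
        return True

f = BasicPrivacyFilter(budget=1.0)
assert f.approve(0.4) and f.approve(0.5) and not f.approve(0.2)
```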
arXiv Detail & Related papers (2023-06-24T00:33:34Z)
- Summary Statistic Privacy in Data Sharing [23.50797952699759]
We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution.
We propose summary statistic privacy, a metric for quantifying the privacy risk of such a mechanism.
We show that the proposed quantization mechanisms achieve better privacy-distortion tradeoffs than alternative privacy mechanisms.
arXiv Detail & Related papers (2023-03-03T15:29:19Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
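For concreteness, a sketch of the canonical discrete-valued local DP mechanism of the kind the paper analyzes: binary randomized response, which reports the true bit with probability $e^\epsilon/(1+e^\epsilon)$ and satisfies $\epsilon$-LDP. The paper's tight $f$-DP trade-off curves are not derived here.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Binary randomized response: a finite-output epsilon-LDP mechanism.

    Reports the true bit with probability e^eps / (1 + e^eps) and flips
    it otherwise; its output space has just two values, so it is
    trivially communication-efficient.
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit
```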
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Introducing the Huber mechanism for differentially private low-rank matrix completion [9.944551494217075]
We propose a novel noise addition mechanism for preserving differential privacy.
The proposed Huber mechanism is evaluated against existing differential privacy mechanisms.
We prove that the proposed mechanism achieves $\epsilon$-differential privacy, similarly to the Laplace mechanism.
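The summary does not specify the paper's exact noise distribution or calibration. As one plausible illustration, a rejection sampler for noise with density proportional to $\exp(-\mathrm{Huber}_\delta(x))$ (Gaussian-like near zero, Laplace-like tails), using a Laplace proposal with envelope constant $e^{\delta^2/2}$; the parameter $\delta$ is an assumption.

```python
import numpy as np

def sample_huber_noise(delta, size, rng=None):
    """Rejection-sample noise with density proportional to exp(-Huber_delta(x)).

    Huber loss: x^2/2 for |x| <= delta, delta*|x| - delta^2/2 otherwise.
    Proposal: Laplace(scale=1/delta), with envelope constant exp(delta^2/2).
    """
    rng = np.random.default_rng(rng)
    out = np.empty(size)
    filled = 0
    while filled < size:
        x = rng.laplace(scale=1.0 / delta, size=size - filled)
        a = np.abs(x)
        # Acceptance probability exp(delta*|x| - Huber(x) - delta^2/2):
        # exp(-(|x|-delta)^2/2) in the quadratic region, 1 in the tails.
        accept_p = np.where(a <= delta, np.exp(-((a - delta) ** 2) / 2), 1.0)
        keep = x[rng.random(x.size) < accept_p]
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out
```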
arXiv Detail & Related papers (2022-06-16T04:33:06Z)
- Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
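A minimal sketch of the additive-logistic idea, assuming a post-training step that perturbs the fine-tuned weights with i.i.d. logistic noise; the noise scale here is a placeholder, not the paper's calibrated value.

```python
import numpy as np

def protect_weights(weights, scale, rng=None):
    """Add i.i.d. logistic noise to fine-tuned weights after training.

    `scale` is a hypothetical placeholder; the paper calibrates the
    noise to the desired privacy level.
    """
    rng = np.random.default_rng(rng)
    return weights + rng.logistic(loc=0.0, scale=scale, size=weights.shape)

protected = protect_weights(np.zeros((4, 4)), scale=0.05)
```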
arXiv Detail & Related papers (2022-05-25T01:33:52Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows the inference of private data.
Existing schemes rely on perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to remain invisible in the aggregate.
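The exact nullspace condition is specific to the network's combination matrix, which the summary does not give. As a simplified illustration, a sketch of zero-sum perturbations across agents, which cancel under plain averaging and so leave the aggregate unchanged.

```python
import numpy as np

def zero_sum_perturbations(num_agents, dim, scale, rng=None):
    """Draw perturbations that sum to zero across agents.

    Projecting i.i.d. noise onto the zero-sum subspace (the nullspace of
    the all-ones aggregation vector) makes the perturbations cancel
    under plain averaging -- a simplified stand-in for the paper's
    condition, which is tailored to the network's combination matrix.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=scale, size=(num_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)

p = zero_sum_perturbations(5, 3, scale=1.0)
assert np.allclose(p.sum(axis=0), 0.0)  # invisible in the average
```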
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.