The Skellam Mechanism for Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2110.04995v1
- Date: Mon, 11 Oct 2021 04:28:11 GMT
- Title: The Skellam Mechanism for Differentially Private Federated Learning
- Authors: Naman Agarwal and Peter Kairouz and Ziyu Liu
- Abstract summary: We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables.
We analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the Rényi divergence between two shifted Skellam distributions.
While the mechanism is useful in both centralized and distributed privacy applications, we investigate how it can be applied in the context of federated learning.
- Score: 28.623090760737075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the multi-dimensional Skellam mechanism, a discrete differential
privacy mechanism based on the difference of two independent Poisson random
variables. To quantify its privacy guarantees, we analyze the privacy loss
distribution via a numerical evaluation and provide a sharp bound on the
Rényi divergence between two shifted Skellam distributions. While useful in
both centralized and distributed privacy applications, we investigate how it
can be applied in the context of federated learning with secure aggregation
under communication constraints. Our theoretical findings and extensive
experimental evaluations demonstrate that the Skellam mechanism provides the
same privacy-accuracy trade-offs as the continuous Gaussian mechanism, even
when the precision is low. More importantly, Skellam is closed under summation
and sampling from it only requires sampling from a Poisson distribution -- an
efficient routine that ships with all machine learning and data analysis
software packages. These features, along with its discrete nature and
competitive privacy-accuracy trade-offs, make it an attractive alternative to
the newly introduced discrete Gaussian mechanism.
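As a concrete illustration of the mechanism described in the abstract, here is a minimal Python sketch, assuming NumPy; the noise level mu and the integer-valued client updates are illustrative placeholders, and the paper's full pipeline (quantization, clipping, secure aggregation, and privacy calibration) is not shown.

    # Minimal sketch: Skellam(mu, mu) noise is the difference of two
    # independent Poisson(mu) draws: symmetric, integer-valued, and with
    # variance 2 * mu. The scale mu below is illustrative, not a
    # calibrated privacy parameter.
    import numpy as np

    rng = np.random.default_rng(0)

    def skellam_noise(mu, size):
        # Difference of two independent Poisson samples.
        return rng.poisson(mu, size) - rng.poisson(mu, size)

    def noisy_update(int_update, mu):
        # Add integer Skellam noise to an integer-valued client update.
        return int_update + skellam_noise(mu, int_update.shape)

    # Closure under summation: a sum of n independent Skellam(mu, mu)
    # variables is Skellam(n * mu, n * mu), so the server-side aggregate
    # of noisy client updates is itself Skellam-perturbed.
    clients = [rng.integers(-5, 6, size=4) for _ in range(3)]
    aggregate = sum(noisy_update(u, mu=100.0) for u in clients)
    print(aggregate)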
Related papers
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
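The summary does not spell out the correlated construction; as background, here is a minimal sketch of plain (uncorrelated) unbiased stochastic binary quantization, the building block such mechanisms refine. The clip range c is a hypothetical parameter.

    # Background sketch only: uncorrelated unbiased binary quantization.
    # CorBin-FL's correlated variant (built on common randomness shared
    # across clients) is not detailed in the summary above.
    import numpy as np

    rng = np.random.default_rng(1)

    def binary_quantize(x, c):
        # Map each coordinate of x (clipped to [-c, c]) to {+c, -c} such
        # that the output is unbiased: E[q] = x.
        p_plus = (1.0 + np.clip(x, -c, c) / c) / 2.0
        return np.where(rng.random(x.shape) < p_plus, c, -c)

    print(binary_quantize(np.array([0.2, -0.7, 0.0]), c=1.0))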
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- Universal Exact Compression of Differentially Private Mechanisms [47.57948804514929]
We introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer.
PPR preserves the joint distribution of the data and the output of the original local randomizer.
Experiment results show that PPR consistently gives a better trade-off among communication, accuracy, and central and local differential privacy.
arXiv Detail & Related papers (2024-05-28T23:54:31Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
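For context, the standard single-site inverse propensity score (IPW) estimator of the average treatment effect, which collaborative estimators of this kind generalize (the paper's collaborative construction itself is not given in the summary), with treatment $T_i$, outcome $Y_i$, and estimated propensity $\hat{e}(X_i) = \hat{P}(T_i = 1 \mid X_i)$:

    \hat{\tau}_{\mathrm{IPW}} = \frac{1}{n} \sum_{i=1}^{n}
    \left( \frac{T_i Y_i}{\hat{e}(X_i)} - \frac{(1 - T_i) Y_i}{1 - \hat{e}(X_i)} \right)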
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
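A minimal sketch of adding symmetric alpha-stable (SaS) noise, assuming SciPy's levy_stable distribution (beta = 0 gives the symmetric case); the scale below is illustrative, not the paper's privacy calibration.

    # Symmetric alpha-stable noise: stability parameter alpha in (0, 2],
    # skewness beta = 0. alpha = 2 recovers the Gaussian, alpha = 1 the
    # Cauchy.
    import numpy as np
    from scipy.stats import levy_stable

    def sas_noise(alpha, scale, size, seed=0):
        return levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                               size=size, random_state=seed)

    print(np.zeros(5) + sas_noise(alpha=1.5, scale=1.0, size=5))
    # Closure under convolution: a sum of n i.i.d. SaS draws with scale
    # sigma is again SaS, with scale n**(1/alpha) * sigma.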
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
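For reference, the standard $f$-DP definition (not restated in the summary above): with the trade-off function between distributions $P$ and $Q$ defined over rejection rules $\phi$ as

    T(P, Q)(\alpha) = \inf \{\, \beta_\phi : \alpha_\phi \le \alpha \,\}, \qquad
    \alpha_\phi = \mathbb{E}_P[\phi], \quad \beta_\phi = 1 - \mathbb{E}_Q[\phi],

a mechanism $M$ is $f$-DP if $T(M(S), M(S'))(\alpha) \ge f(\alpha)$ for all neighboring datasets $S, S'$ and all $\alpha \in [0, 1]$; Gaussian DP is the special case $f = G_\mu$ with $G_\mu(\alpha) = \Phi(\Phi^{-1}(1 - \alpha) - \mu)$.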
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions [9.20186865054847]
We study the i.n.i.d. (independent but non-identically distributed) Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
We show how the i.n.i.d. noise can improve the performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
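A minimal sketch of the coordinate-wise disparity idea, assuming NumPy; the per-coordinate scales are hypothetical, and the paper's conditions for calibrating them privately are not shown.

    # i.n.i.d. noise: each coordinate gets its own Laplace scale b[i]
    # instead of one shared scale.
    import numpy as np

    rng = np.random.default_rng(2)

    def inid_laplace(x, scales):
        # One independent Laplace draw per coordinate, disparate scales.
        return x + rng.laplace(loc=0.0, scale=scales)

    scales = np.array([0.5, 1.0, 2.0, 4.0])  # hypothetical scales
    print(inid_laplace(np.zeros(4), scales))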
arXiv Detail & Related papers (2023-02-07T14:54:20Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers fully decentralized settings.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
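The summary does not restate the update rule; for context, the classical extra-gradient step for a VI with operator $F$ on domain $\mathcal{X}$, with step size $\gamma$ (the paper's decentralized local stochastic variant builds on it):

    x_{k+1/2} = \Pi_{\mathcal{X}}\!\left( x_k - \gamma F(x_k) \right), \qquad
    x_{k+1} = \Pi_{\mathcal{X}}\!\left( x_k - \gamma F(x_{k+1/2}) \right),

where $\Pi_{\mathcal{X}}$ denotes Euclidean projection onto $\mathcal{X}$.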
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows private data to be inferred.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
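As a hedged illustration of the nullspace idea, here is an elementary zero-sum construction in which perturbations cancel in the network average; the paper's graph-homomorphic construction is more specific than this centering trick.

    # Perturbations in the nullspace of averaging: they sum to zero across
    # agents, so the network-wide average is unaffected. This centering
    # trick is an elementary stand-in for the paper's construction.
    import numpy as np

    rng = np.random.default_rng(3)

    def zero_sum_perturbations(num_agents, dim, sigma):
        s = rng.normal(0.0, sigma, size=(num_agents, dim))
        return s - s.mean(axis=0, keepdims=True)

    n = zero_sum_perturbations(num_agents=5, dim=3, sigma=1.0)
    print(np.allclose(n.sum(axis=0), 0.0))  # True: invisible on average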
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Near Instance-Optimality in Differential Privacy [38.8726789833284]
We develop notions of instance optimality in differential privacy inspired by classical statistical theory.
We also develop inverse sensitivity mechanisms, which are instance optimal (or nearly instance optimal) for a large class of estimands.
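For context, the inverse sensitivity mechanism can be written as an exponential mechanism whose score is the distance from the dataset $x$ to the nearest dataset attaining value $t$ (a standard formulation; the summary above does not restate it):

    \mathrm{len}_f(x; t) = \min \{\, d_{\mathrm{ham}}(x, x') : f(x') = t \,\}, \qquad
    \pi_x(t) \propto \exp\!\left( -\frac{\varepsilon}{2}\, \mathrm{len}_f(x; t) \right),

where $d_{\mathrm{ham}}$ is the Hamming distance between datasets.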
arXiv Detail & Related papers (2020-05-16T04:53:48Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.