The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation
- URL: http://arxiv.org/abs/2102.06387v1
- Date: Fri, 12 Feb 2021 08:20:18 GMT
- Title: The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation
- Authors: Peter Kairouz and Ziyu Liu and Thomas Steinke
- Abstract summary: We present a comprehensive end-to-end system, which appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation.
Our theoretical guarantees highlight the complex tension between communication, privacy, and accuracy.
- Score: 28.75998313625891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider training models on private data that is distributed across user
devices. To ensure privacy, we add on-device noise and use secure aggregation
so that only the noisy sum is revealed to the server. We present a
comprehensive end-to-end system, which appropriately discretizes the data and
adds discrete Gaussian noise before performing secure aggregation. We provide a
novel privacy analysis for sums of discrete Gaussians. We also analyze the
effect of rounding the input data and the modular summation arithmetic. Our
theoretical guarantees highlight the complex tension between communication,
privacy, and accuracy. Our extensive experimental results demonstrate that our
solution is essentially able to achieve a comparable accuracy to central
differential privacy with 16 bits of precision per value.
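As a rough illustration of the pipeline described above, the sketch below scales and rounds a client value, adds discrete Gaussian noise, and reveals only the modular sum to the server. The parameter values, the truncated-support sampler, and the deterministic rounding are illustrative simplifications (the paper itself uses randomized rounding and carefully calibrated noise), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_discrete_gaussian(sigma, size, tail=12):
    """Sample from the discrete Gaussian N_Z(0, sigma^2) by normalizing
    exp(-x^2 / (2 sigma^2)) over a truncated integer support; tails beyond
    +-tail*sigma are negligible for this illustration."""
    radius = int(np.ceil(tail * sigma))
    support = np.arange(-radius, radius + 1)
    probs = np.exp(-support.astype(float) ** 2 / (2 * sigma ** 2))
    probs /= probs.sum()
    return rng.choice(support, size=size, p=probs)

def client_encode(x, scale, sigma, modulus):
    """Discretize a real value, add discrete Gaussian noise, reduce mod q."""
    quantized = int(np.round(x * scale))  # the paper uses randomized rounding
    noisy = quantized + int(sample_discrete_gaussian(sigma, size=1)[0])
    return noisy % modulus

def server_decode(encoded_values, scale, modulus):
    """Secure aggregation reveals only the modular sum of client messages."""
    total = sum(encoded_values) % modulus
    if total >= modulus // 2:             # map back to the signed range
        total -= modulus
    return total / scale

# Hypothetical parameters: a 16-bit modulus and small client values.
scale, sigma, modulus = 100.0, 10.0, 2 ** 16
clients = [0.31, -0.42, 0.75, 0.11]
messages = [client_encode(x, scale, sigma, modulus) for x in clients]
print("private sum estimate:", server_decode(messages, scale, modulus))
print("true sum:", sum(clients))
```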
Related papers
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted server between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
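A toy sketch of the shuffle model this entry refers to (the randomizer, the per-user epsilons, and the input values below are hypothetical): users apply local randomized response, and a trusted shuffler forwards a random permutation of the reports, unlinking users from messages.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_randomizer(bit, epsilon_local):
    """Binary randomized response with a per-user local epsilon."""
    keep_prob = np.exp(epsilon_local) / (np.exp(epsilon_local) + 1)
    return bit if rng.random() < keep_prob else 1 - bit

def shuffler(reports):
    """The trusted intermediary only permutes; it adds no noise itself."""
    return rng.permutation(reports)

# Personalized local privacy: each user picks their own epsilon.
bits = [1, 0, 1, 1, 0, 0, 1]
epsilons = [0.5, 1.0, 2.0, 0.5, 1.0, 0.5, 2.0]
shuffled = shuffler([local_randomizer(b, e) for b, e in zip(bits, epsilons)])
print("anonymized, shuffled reports:", shuffled)
```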
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
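A minimal sketch of what an SaS-style mechanism could look like, assuming `scipy.stats.levy_stable`; the `alpha` and `scale` values are illustrative and not calibrated to any privacy budget.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

def sas_mechanism(query_value, alpha=1.5, scale=1.0):
    """Perturb a query answer with symmetric (beta=0) alpha-stable noise.
    Stability under convolution means sums of SaS noise remain SaS."""
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale, random_state=rng)
    return query_value + noise

print(sas_mechanism(42.0))
```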
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
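As one concrete discrete-valued instance (an illustration, not the paper's derivation): binary randomized response with flip probability $p < 1/2$ is pure $\epsilon$-DP with $\epsilon = \log((1-p)/p)$, and its tight tradeoff curve is the standard pure-DP tradeoff function $f_\epsilon(\alpha) = \max(0,\, 1 - e^{\epsilon}\alpha,\, e^{-\epsilon}(1-\alpha))$.

```python
import numpy as np

def randomized_response_epsilon(flip_prob):
    """Pure-DP level of binary randomized response with flip prob p < 1/2."""
    return np.log((1 - flip_prob) / flip_prob)

def tradeoff_eps(alpha, eps):
    """Tight f-DP tradeoff curve of an eps-DP mechanism: the minimal
    type-II error achievable at type-I error alpha."""
    return np.maximum(0.0, np.maximum(1 - np.exp(eps) * alpha,
                                      np.exp(-eps) * (1 - alpha)))

eps = randomized_response_epsilon(0.25)   # p = 1/4  ->  eps = ln 3
alphas = np.linspace(0, 1, 5)
print("eps:", eps)
print("tradeoff:", tradeoff_eps(alphas, eps))
```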
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions [9.20186865054847]
We study the independent but non-identically distributed (i.n.i.d.) Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
We show how the i.n.i.d. noise can improve the performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
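A hedged sketch of the i.n.i.d. Laplace idea: each coordinate $i$ receives Laplace noise with its own scale $b_i$, and the release is $\epsilon$-DP whenever $\sum_i \Delta_i / b_i \le \epsilon$. The proportional allocation below is a simple baseline choice, not the optimized disparate allocation the paper derives, and the sensitivities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def inid_laplace(vector, sensitivities, epsilon):
    """Add independent, non-identically distributed Laplace noise.
    Scales b_i = Delta_i * d / epsilon give sum_i Delta_i / b_i = epsilon,
    hence an epsilon-DP release (proportional allocation; the paper
    optimizes this choice coordinate-wise)."""
    sens = np.asarray(sensitivities, dtype=float)
    scales = sens * len(sens) / epsilon
    return vector + rng.laplace(0.0, scales)

x = np.array([1.0, 2.0, 3.0])
deltas = [0.1, 0.5, 1.0]   # hypothetical per-coordinate sensitivities
print(inid_laplace(x, deltas, epsilon=1.0))
```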
arXiv Detail & Related papers (2023-02-07T14:54:20Z)
- Privacy Amplification via Shuffled Check-Ins [2.3333090554192615]
We study a protocol for distributed computation called shuffled check-in.
It achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
We show that shuffled check-in achieves tight privacy guarantees through privacy amplification.
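A toy model of shuffled check-ins (the participation probability and local randomizer are illustrative assumptions): each client independently decides whether to check in, and only participants' locally randomized reports reach the trusted shuffler, so subsampling and shuffling jointly amplify privacy.

```python
import numpy as np

rng = np.random.default_rng(4)

def shuffled_check_in(values, participation_prob, epsilon_local):
    """Each client flips a coin to check in; participants send a locally
    randomized bit; the trusted shuffler permutes the collected reports."""
    keep = np.exp(epsilon_local) / (np.exp(epsilon_local) + 1)
    reports = []
    for v in values:
        if rng.random() < participation_prob:   # random check-in
            reports.append(v if rng.random() < keep else 1 - v)
    return rng.permutation(reports)

print(shuffled_check_in([1, 0, 1, 1, 0, 1],
                        participation_prob=0.5, epsilon_local=1.0))
```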
arXiv Detail & Related papers (2022-06-07T09:55:15Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
In decentralized learning, the local exchange of estimates allows the inference of private data.
Protecting against this with perturbations chosen independently at every agent results in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to remain invisible to the network's aggregate (centroid) computation.
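One illustrative reading of the nullspace condition (an assumed construction, not necessarily the paper's exact scheme): draw correlated perturbations that sum to zero across agents, so they vanish from the network centroid while individual shares still mask local estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

def nullspace_perturbations(num_agents, dim, std=1.0):
    """Generate per-agent noise whose agent-wise sum is exactly zero:
    project i.i.d. Gaussian draws onto the nullspace of 1^T by
    subtracting the cross-agent mean."""
    noise = rng.normal(0.0, std, size=(num_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)

estimates = rng.normal(size=(4, 3))        # 4 agents, 3-dim estimates
perturbed = estimates + nullspace_perturbations(4, 3)
print("centroid unchanged:",
      np.allclose(perturbed.mean(axis=0), estimates.mean(axis=0)))
```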
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- The Discrete Gaussian for Differential Privacy [26.179150185540514]
A key tool for building differentially private systems is adding Gaussian noise to the output of a function evaluated on a sensitive dataset.
Previous work has demonstrated that seemingly innocuous numerical errors can entirely destroy privacy.
We introduce and analyze the discrete Gaussian in the context of differential privacy.
arXiv Detail & Related papers (2020-03-31T18:00:00Z)
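Because the discrete Gaussian underpins the main paper above, here is a compact sampler sketch in the style of the Canonne-Kamath-Steinke rejection sampler: propose from a discrete Laplace distribution, then accept with a Gaussian-shaped probability. Treat this as an illustrative sketch rather than a vetted implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

def discrete_laplace(t):
    """Sample Y with P[Y = y] proportional to exp(-|y| / t) over the
    integers, as a difference of two i.i.d. geometric variables."""
    p = 1.0 - np.exp(-1.0 / t)
    return (rng.geometric(p) - 1) - (rng.geometric(p) - 1)

def discrete_gaussian(sigma):
    """Rejection sampler for N_Z(0, sigma^2): the acceptance probability
    exp(-(|y| - sigma^2/t)^2 / (2 sigma^2)) reshapes the discrete Laplace
    proposal into the discrete Gaussian target."""
    t = int(np.floor(sigma)) + 1
    while True:
        y = discrete_laplace(t)
        accept = np.exp(-((abs(y) - sigma ** 2 / t) ** 2) / (2 * sigma ** 2))
        if rng.random() < accept:
            return y

print([discrete_gaussian(5.0) for _ in range(5)])
```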
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.