Multi-Message Shuffled Privacy in Federated Learning
- URL: http://arxiv.org/abs/2302.11152v1
- Date: Wed, 22 Feb 2023 05:23:52 GMT
- Title: Multi-Message Shuffled Privacy in Federated Learning
- Authors: Antonious M. Girgis and Suhas Diggavi
- Abstract summary: We study differentially private distributed optimization under communication constraints.
A server running SGD aggregates the client-side local gradients for model updates via distributed mean estimation (DME).
We develop a communication-efficient private DME, using the recently developed multi-message shuffled (MMS) privacy framework.
- Score: 2.6778110563115542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study differentially private distributed optimization under
communication constraints. A server running SGD aggregates the client-side
local gradients for model updates via distributed mean estimation (DME). We
develop a communication-efficient private DME using the recently developed
multi-message shuffled (MMS) privacy framework. We analyze the proposed DME
scheme and show that it achieves the order-optimal
privacy-communication-performance tradeoff, resolving the open question posed
in [1] of whether shuffled models can improve on the tradeoff obtained with
Secure Aggregation. This also settles the open question of the optimal
tradeoff for private vector summation in the MMS model. We achieve this
through a novel privacy mechanism that non-uniformly allocates privacy across
different resolutions of the local gradient vectors. These results directly
yield guarantees for private distributed learning algorithms that apply the
scheme iteratively for private gradient aggregation. We also evaluate the
private DME algorithms numerically.
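As a rough illustration of the multi-message shuffled setting, the sketch below has each client quantize its vector, add discrete noise, and split the result into several additive messages that a shuffler permutes before the server sums them. All names, the Poisson noise, the mod-L splitting, and the parameter choices are illustrative assumptions; in particular, the paper's actual mechanism allocates privacy non-uniformly across gradient resolutions, which this sketch omits.

```python
# Minimal sketch of shuffled-model private distributed mean estimation (DME).
# NOT the paper's mechanism: noise choice, message split, and parameters are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def client_messages(x, k, m, L, noise_lam):
    """One client's multi-message encoding of a vector x in [0, 1]^d:
    stochastic quantization to k levels, discrete noise for privacy, then an
    m-way additive split mod L so no single message reveals the value."""
    d = x.shape[0]
    q = np.floor(x * (k - 1) + rng.random(d)).astype(np.int64)  # unbiased rounding
    q += rng.poisson(noise_lam, d)               # placeholder discrete noise
    shares = rng.integers(0, L, size=(m - 1, d), dtype=np.int64)
    last = (q - shares.sum(axis=0)) % L          # completes the modular sum
    return np.vstack([shares, last])             # m messages of d integers each

def shuffle_then_aggregate(all_msgs, n, k, L, noise_lam):
    """Shuffler: randomly permute all messages; server: modular sum + debias."""
    msgs = np.concatenate(all_msgs, axis=0)
    msgs = msgs[rng.permutation(len(msgs))]      # message-client links destroyed
    total = msgs.sum(axis=0) % L                 # recovers the sum of noisy q's
    return (total - n * noise_lam) / (n * (k - 1))  # debias, rescale to [0, 1]

n, d, k, m = 50, 8, 64, 3
L = n * 4 * k                                    # modulus big enough to avoid wraparound
X = rng.random((n, d))
msgs = [client_messages(X[i], k, m, L, noise_lam=5.0) for i in range(n)]
print("true mean:       ", np.round(X.mean(axis=0), 3))
print("private estimate:", np.round(shuffle_then_aggregate(msgs, n, k, L, noise_lam=5.0), 3))
```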
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained on data from end-users arranged into committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining the utility of the model.
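As a hedged sketch of the secret-sharing idea above, the snippet below uses plain additive secret sharing mod a prime, a simplification of the packed (Shamir-style) sharing the paper actually builds on; the modulus and committee size are illustrative assumptions.

```python
# Additive secret sharing of client updates inside a committee: each member
# sees only random-looking shares, yet share-wise sums reveal the aggregate.
import numpy as np

rng = np.random.default_rng(1)
P = 2**31 - 1  # a public prime modulus (illustrative choice)

def share(update, n_committee):
    """Split an integer-encoded update vector into n additive shares mod P."""
    shares = rng.integers(0, P, size=(n_committee - 1, update.shape[0]))
    last = (update - shares.sum(axis=0)) % P
    return np.vstack([shares, last])

# Three clients, committee of 4: no member learns any client's update,
# but combining the members' share-sums yields the total update.
updates = rng.integers(0, 1000, size=(3, 5))
all_shares = [share(u, 4) for u in updates]
sum_per_member = sum(all_shares) % P        # each member sums its own shares
total = sum_per_member.sum(axis=0) % P      # combining members gives the sum
assert np.array_equal(total, updates.sum(axis=0) % P)
print(total)
```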
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Adaptively Private Next-Token Prediction of Large Language Models [13.297381972044558]
We introduce a noisy screening mechanism that filters out queries with potentially expensive privacy loss.
The resulting method, AdaPMixED, reduces privacy loss by 16x while preserving utility relative to the original PMixED.
arXiv Detail & Related papers (2024-10-02T20:34:24Z)
- Differentially Private Next-Token Prediction of Large Language Models [13.297381972044558]
DP-SGD, which trains a model to guarantee Differential Privacy, overestimates an adversary's capabilities by assuming white-box access to the model.
We present PMixED: a private prediction protocol for next-token prediction that exploits the inherent stochasticity of next-token sampling and a public model to achieve Differential Privacy.
Our results show that PMixED achieves a stronger privacy guarantee than sample-level privacy and outperforms DP-SGD at privacy budget $\epsilon = 8$ on large-scale datasets.
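A minimal sketch of the mixing idea described above, assuming a simple convex combination rule: each private model's next-token distribution is pulled toward a public model's distribution before ensembling, which is what bounds the per-query privacy loss. The mixing rule and `lam` are illustrative assumptions, not the paper's exact projection.

```python
# Public/private mixing for private next-token prediction (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def mix(private_dists, public_dist, lam=0.3):
    """Ensemble average of lam * private + (1 - lam) * public distributions."""
    return np.mean([lam * p + (1 - lam) * public_dist for p in private_dists], axis=0)

vocab = 5
public = np.full(vocab, 1.0 / vocab)                         # toy public model
private = [rng.dirichlet(np.ones(vocab)) for _ in range(4)]  # toy private ensemble
probs = mix(private, public)
next_token = rng.choice(vocab, p=probs)                      # sample the mixed distribution
print(probs, probs.sum(), next_token)
```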
arXiv Detail & Related papers (2024-03-22T22:27:44Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
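Schematically, a bound of the kind described might take the following shape for noisy gradient descent; the exact constants, conditions, and form in the paper differ, so this is only an assumption-laden illustration of how the expected squared gradient norm can drive the bound.

```latex
% Schematic only: noisy GD with step size \eta and noise variance \sigma^2.
\mathrm{KL}\left(P_{W_T} \,\middle\|\, P'_{W_T}\right)
  \;\lesssim\; \frac{\eta^2}{2\sigma^2}
  \sum_{t=1}^{T} \mathbb{E}\left\|\nabla \ell(W_t)\right\|^2
```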
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Balancing Privacy and Performance for Private Federated Learning Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
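To make the tradeoff concrete, here is a hedged sketch of one DP federated averaging round: each client runs several local SGD steps, clips its total update, and adds Gaussian noise before sharing. The toy quadratic objective, clip norm, and noise scale are illustrative assumptions, not the paper's setup; more local steps per round let a fixed privacy budget fund fewer communication rounds.

```python
# One DP federated-averaging round on a toy quadratic loss (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

def local_update(w, data, local_steps, lr=0.1):
    """A client runs local SGD steps on (1/2)||w - x||^2 and returns its delta."""
    w_local = w.copy()
    for _ in range(local_steps):
        x = data[rng.integers(len(data))]
        w_local -= lr * (w_local - x)          # gradient of the quadratic loss
    return w_local - w

def dp_round(w, clients, local_steps, clip=1.0, sigma=0.5):
    """Clip each client's delta and add Gaussian noise before averaging."""
    deltas = []
    for data in clients:
        d = local_update(w, data, local_steps)
        d = d / max(1.0, np.linalg.norm(d) / clip)   # enforce norm <= clip
        deltas.append(d + rng.normal(0, sigma * clip, d.shape))
    return w + np.mean(deltas, axis=0)

clients = [rng.normal(loc=c, size=(20, 3)) for c in (0.0, 1.0, 2.0)]
w = np.zeros(3)
for _ in range(30):                            # rounds; each round spends budget
    w = dp_round(w, clients, local_steps=5)
print("final model:", np.round(w, 2))          # drifts toward the overall mean ~1.0
```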
arXiv Detail & Related papers (2023-04-11T10:42:11Z)
- From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning [4.202534541804858]
We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations.
We establish strong privacy guarantees leveraging privacy amplification by iteration and by subsampling.
We provide utility guarantees using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations.
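A minimal sketch of the viewpoint above: a differentially private algorithm as a noisy fixed-point iteration x_{t+1} = (1 - lam) * x_t + lam * (T(x_t) + noise). The contractive operator T (a gradient step on a toy quadratic) and the noise scale are illustrative assumptions.

```python
# DP algorithm viewed as a noisy fixed-point iteration (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])

def T(x, lr=0.3):
    """A contractive operator whose fixed point solves Ax = b."""
    return x - lr * (A @ x - b)

x = np.zeros(2)
lam = 1.0                                        # relaxation parameter
for t in range(200):
    x = (1 - lam) * x + lam * (T(x) + rng.normal(0, 0.01, 2))  # Gaussian DP noise
print(np.round(x, 3), "vs exact", np.linalg.solve(A, b))       # hovers near the fixed point
```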
arXiv Detail & Related papers (2023-02-24T10:24:03Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical challenge of developing machine learning algorithms that achieve good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
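In the spirit of the public/private mixing described above, the hedged sketch below combines a public gradient (which costs no privacy budget) with a clipped, noised average of private gradients; the 50/50 combination rule and all constants are illustrative assumptions, not AdaMix's actual adaptive rule.

```python
# Mixing public and noised private gradients in one update (illustrative only).
import numpy as np

rng = np.random.default_rng(5)

def mixed_step(w, grad_public, grads_private, lr=0.1, clip=1.0, sigma=0.8):
    """Combine a free public gradient with a clipped, noised private average."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip) for g in grads_private]
    noisy = np.mean(clipped, axis=0) + rng.normal(0, sigma * clip / len(clipped), w.shape)
    return w - lr * (0.5 * grad_public + 0.5 * noisy)

w, target = np.zeros(3), np.ones(3)
for _ in range(100):
    grad_pub = w - target                                     # toy public gradient
    grads_priv = [w - target + rng.normal(0, 0.2, 3) for _ in range(8)]
    w = mixed_step(w, grad_pub, grads_priv)
print("w ->", np.round(w, 2))                                 # approaches the target
```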
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds and enjoys a JDP guarantee.
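As a hedged illustration of one ingredient such algorithms rely on, the sketch below privatizes the visit counts that drive optimistic exploration bonuses with Laplace noise; the bonus formula and noise placement are illustrative assumptions, not the paper's JDP construction.

```python
# Exploration bonuses from Laplace-noised visit counts (illustrative only).
import numpy as np

rng = np.random.default_rng(6)

def private_bonus(visit_counts, eps=1.0, c=1.0):
    """Bonus c / sqrt(noisy count): rarely visited state-actions keep a large
    bonus, while the true counts are never released."""
    noisy = visit_counts + rng.laplace(0, 1.0 / eps, visit_counts.shape)
    noisy = np.maximum(noisy, 1.0)           # keep the bonus well-defined
    return c / np.sqrt(noisy)

counts = np.array([[1, 50], [10, 200]])     # toy |S| x |A| visit table
print(np.round(private_bonus(counts), 3))
```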
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Towards Plausible Differentially Private ADMM Based Distributed Machine Learning [27.730535587906168]
We propose a novel Plausible differentially Private ADMM algorithm, PP-ADMM, along with an improved variant, IPP-ADMM.
Under the same privacy guarantee, the proposed algorithms are superior to the state of the art in terms of model accuracy and convergence rate.
arXiv Detail & Related papers (2020-08-11T03:40:55Z)