Randomized Quantization is All You Need for Differential Privacy in
Federated Learning
- URL: http://arxiv.org/abs/2306.11913v1
- Date: Tue, 20 Jun 2023 21:54:13 GMT
- Title: Randomized Quantization is All You Need for Differential Privacy in
Federated Learning
- Authors: Yeojoon Youn, Zihao Hu, Juba Ziani, Jacob Abernethy
- Abstract summary: We consider an approach to federated learning that combines quantization and differential privacy.
We develop a new algorithm called the Randomized Quantization Mechanism (RQM).
We empirically study the performance of our algorithm and demonstrate that, compared to previous work, it yields improved privacy-accuracy trade-offs.
- Score: 1.9785872350085876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a common and practical framework for learning a
machine learning model in a decentralized fashion. A primary motivation behind this
decentralized approach is data privacy, ensuring that the learner never sees
the data of each local source itself. Federated learning then comes with two
major challenges: one is handling potentially complex model updates between a
server and a large number of data sources; the other is that decentralization
may, in fact, be insufficient for privacy, as the local updates themselves can
reveal information about the sources' data. To address these issues, we
consider an approach to federated learning that combines quantization and
differential privacy. Absent privacy, Federated Learning often relies on
quantization to reduce communication complexity. We build upon this approach
and develop a new algorithm called the Randomized Quantization Mechanism
(RQM), which obtains privacy through two levels of randomization. More
precisely, we randomly sub-sample feasible
quantization levels, then employ a randomized rounding procedure using these
sub-sampled discrete levels. We establish that this mechanism preserves
Renyi differential privacy (Renyi DP). We empirically study the performance
of our algorithm and demonstrate that, compared to previous work, it yields
improved privacy-accuracy trade-offs for DP federated learning. To the best of
our knowledge, this is the first study that solely relies on randomized
quantization without incorporating explicit discrete noise to achieve Renyi DP
guarantees in Federated Learning systems.
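The two levels of randomization described above can be illustrated with a short sketch. The Python snippet below is only a minimal illustration built from the abstract's description (sub-sample a grid of quantization levels, then randomly round onto the surviving levels); the grid size, the sub-sampling probability `keep_prob`, the clipping range, and the unbiased stochastic rounding rule are assumptions made for the example, and the exact construction and privacy accounting in the paper may differ.

```python
import numpy as np

def randomized_quantization(x, lo=-1.0, hi=1.0, num_levels=33, keep_prob=0.5, rng=None):
    """Minimal illustration of DP-oriented randomized quantization:
    (1) build a uniform grid of candidate levels on [lo, hi],
    (2) randomly sub-sample the interior levels (endpoints always survive),
    (3) randomly round x onto the two surviving levels that bracket it,
        using unbiased stochastic rounding.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = float(np.clip(x, lo, hi))
    levels = np.linspace(lo, hi, num_levels)

    # First level of randomization: sub-sample the feasible quantization levels.
    keep = rng.random(num_levels) < keep_prob
    keep[0] = keep[-1] = True                 # keep endpoints so x is always bracketed
    active = levels[keep]

    # Second level of randomization: randomized rounding onto the bracketing levels.
    hi_idx = min(int(np.searchsorted(active, x, side="right")), len(active) - 1)
    lo_idx = max(hi_idx - 1, 0)
    a, b = active[lo_idx], active[hi_idx]
    if a == b:
        return a
    p_up = (x - a) / (b - a)                  # chosen so E[output] = x
    return b if rng.random() < p_up else a

# A client could apply this coordinate-wise to its clipped update, e.g.:
update = np.array([0.37, -0.82, 0.05])
quantized = np.array([randomized_quantization(v) for v in update])
```

In a federated round, each client would apply such a mechanism to its clipped model update before communicating it, so the message is both compressed (a few discrete levels) and randomized, the latter being the source of the Renyi DP guarantee.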
Related papers
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Fusion of Global and Local Knowledge for Personalized Federated Learning [75.20751492913892]
In this paper, we explore personalized models with low-rank and sparse decomposition.
We propose a two-stage algorithm named Federated learning with mixed Sparse and low-Rank representation (FedSLR).
Under proper assumptions, we show that the GKR trained by FedSLR can at least sub-linearly converge to a stationary point of the regularized problem.
arXiv Detail & Related papers (2023-02-21T23:09:45Z)
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting (a minimal sketch of this correspondence appears at the end of this related-papers list).
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning [7.197592390105457]
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.
Motivated by optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round.
To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism.
arXiv Detail & Related papers (2021-07-19T11:43:24Z)
- A Graph Federated Architecture with Privacy Preserving Learning [48.24121036612076]
Federated learning involves a central processor that works with multiple agents to find a global model.
The current architecture of a server connected to multiple clients is highly sensitive to communication failures and computational overloads at the server.
We use cryptographic and differential privacy concepts to privatize the federated learning algorithm that we extend to the graph structure.
arXiv Detail & Related papers (2021-04-26T09:51:24Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy [20.95527613004989]
Federated learning is a popular approach for privacy protection that collects the local gradient information instead of real data.
Previous works do not give a practical solution due to three issues; in particular, the privacy budget explodes due to the high dimensionality of weights in deep learning models.
arXiv Detail & Related papers (2020-07-31T01:08:57Z)
- Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
arXiv Detail & Related papers (2020-07-13T18:14:09Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
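For the "An Expectation-Maximization Perspective on Federated Learning" entry above, the claimed correspondence between hard EM with Gaussian priors and FedAvg-style averaging can be made concrete with a toy sketch. Everything below (least-squares local objectives, the `prior_strength` parameter, the synthetic data) is a hypothetical setup chosen for brevity, not the authors' implementation; it only illustrates that the hard E-step is a proximally regularized local fit and the M-step reduces to averaging the client parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated setup: each client holds least-squares data from a slightly
# perturbed version of a shared linear model (purely illustrative).
num_clients, dim, n_per_client = 5, 3, 50
true_w = rng.normal(size=dim)
clients = []
for _ in range(num_clients):
    X = rng.normal(size=(n_per_client, dim))
    y = X @ (true_w + 0.1 * rng.normal(size=dim)) + 0.05 * rng.normal(size=n_per_client)
    clients.append((X, y))

def local_map_update(X, y, theta_server, prior_strength):
    """Hard E-step: MAP estimate of the client parameters under a Gaussian prior
    centered at the server parameters, i.e. a proximally regularized local fit.
    Closed form for least squares:
      (X^T X + prior_strength * I)^{-1} (X^T y + prior_strength * theta_server)
    """
    d = X.shape[1]
    A = X.T @ X + prior_strength * np.eye(d)
    b = X.T @ y + prior_strength * theta_server
    return np.linalg.solve(A, b)

theta = np.zeros(dim)        # server model = mean of the Gaussian prior
prior_strength = 1.0         # placeholder precision of the prior

for _ in range(20):
    # Hard E-step: every client computes its MAP-personalized parameters.
    local_params = [local_map_update(X, y, theta, prior_strength) for X, y in clients]
    # M-step: refit the prior mean, which is just the average of the client
    # parameters -- the FedAvg-style aggregation step.
    theta = np.mean(local_params, axis=0)

print("server estimate:   ", theta)
print("generating weights:", true_w)
```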
This list is automatically generated from the titles and abstracts of the papers in this site.