DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving
Federated Learning
- URL: http://arxiv.org/abs/2106.07094v1
- Date: Sun, 13 Jun 2021 21:23:46 GMT
- Title: DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving
Federated Learning
- Authors: Rudrajit Das, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon
- Abstract summary: We propose to have the clients send a private quantized version of only the unit vector along the change in their local parameters, discarding the magnitude information.
We also introduce QTDL, a new differentially private quantization mechanism for unit-norm vectors.
- Score: 48.064786028195506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on facilitating differentially private quantized
communication between the clients and server in federated learning (FL).
Towards this end, we propose to have the clients send a \textit{private
quantized} version of only the \textit{unit vector} along the change in their
local parameters to the server, \textit{completely throwing away the magnitude
information}. We call this algorithm \texttt{DP-NormFedAvg} and show that it
has the same order-wise convergence rate as \texttt{FedAvg} on smooth
quasar-convex functions (an important class of non-convex functions for
modeling optimization of deep neural networks), thereby establishing that
discarding the magnitude information is not detrimental from an optimization
point of view. We also introduce QTDL, a new differentially private
quantization mechanism for unit-norm vectors, which we use in
\texttt{DP-NormFedAvg}. QTDL employs \textit{discrete} noise having a
Laplacian-like distribution on a \textit{finite support} to provide privacy. We
show that under a growth-condition assumption on the per-sample client losses,
the extra per-coordinate communication cost in each round incurred due to
privacy by our method is $\mathcal{O}(1)$ with respect to the model dimension,
which is an improvement over prior work. Finally, we show the efficacy of our
proposed method with experiments on fully-connected neural networks trained on
CIFAR-10 and Fashion-MNIST.
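The abstract describes the algorithm only at a high level. The sketch below (plain NumPy, illustrative only) shows the two ideas it names: each client sends a privately quantized version of the unit vector along its local update, completely discarding the magnitude, and the privacy noise is discrete, Laplacian-like, and confined to a finite support. The grid size `num_levels`, the `discrete_two_sided_geometric` noise, the clipping step, and the server step size `server_lr` are assumptions made for illustration; they are not the paper's QTDL construction or hyperparameters.

```python
import numpy as np

def discrete_two_sided_geometric(rng, scale, size):
    # Discrete "Laplacian-like" noise built as the difference of two geometric
    # variables. This is a stand-in for QTDL's noise; the paper's exact
    # distribution and finite-support construction are not reproduced here.
    p = 1.0 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)

def client_message(global_params, local_params, num_levels=128,
                   noise_scale=2.0, rng=None):
    """Sketch of a DP-NormFedAvg-style client message: send only a privately
    quantized unit direction of the local change, never its magnitude."""
    rng = rng or np.random.default_rng()
    delta = local_params - global_params
    unit = delta / (np.linalg.norm(delta) + 1e-12)   # throw away the magnitude

    # Map each coordinate of the unit vector onto a finite integer grid.
    grid = np.round(unit * num_levels).astype(np.int64)

    # Add discrete Laplacian-like noise, then clip back onto the finite support.
    noisy = grid + discrete_two_sided_geometric(rng, noise_scale, grid.shape)
    return np.clip(noisy, -num_levels, num_levels)

def server_aggregate(client_messages, global_params, server_lr=0.1,
                     num_levels=128):
    # Dequantize, average the noisy unit directions, and take a server step
    # whose size is set by server_lr rather than by client-reported magnitudes.
    directions = [msg / num_levels for msg in client_messages]
    avg_direction = np.mean(directions, axis=0)
    return global_params + server_lr * avg_direction
```

Because only a direction is transmitted, each coordinate of the message is a small integer from a fixed finite range, which is consistent with the abstract's claim that the extra per-coordinate communication cost due to privacy is $\mathcal{O}(1)$ in the model dimension.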
Related papers
- FedScalar: A Communication efficient Federated Learning [0.0]
Federated learning (FL) has gained considerable popularity for distributed machine learning.
FedScalar enables agents to communicate updates using a single scalar.
arXiv Detail & Related papers (2024-10-03T07:06:49Z) - One-Shot Federated Learning with Bayesian Pseudocoresets [19.53527340816458]
We show that distributed function-space inference is tightly related to learning Bayesian pseudocoresets.
We show that this approach achieves prediction performance competitive to state-of-the-art while showing a striking reduction in communication cost of up to two orders of magnitude.
arXiv Detail & Related papers (2024-06-04T10:14:39Z) - Fed-CVLC: Compressing Federated Learning Communications with
Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - Share Your Representation Only: Guaranteed Improvement of the
Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of this information leakage using state-of-the-art differentially private algorithms does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training results.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Federated Gaussian Process: Convergence, Automatic Personalization and
Multi-fidelity Modeling [4.18804572788063]
We show that FGPR excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
arXiv Detail & Related papers (2021-11-28T00:17:31Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
FedGLOMO is the first (first-order) FL algorithm that is provably optimal even with compressed communication between the clients and the server.
arXiv Detail & Related papers (2020-12-07T21:05:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.