FedPower: Privacy-Preserving Distributed Eigenspace Estimation
- URL: http://arxiv.org/abs/2103.00704v2
- Date: Tue, 27 Jun 2023 04:18:06 GMT
- Title: FedPower: Privacy-Preserving Distributed Eigenspace Estimation
- Authors: Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang and Zhihua Zhang
- Abstract summary: We propose a class of algorithms called \textsf{FedPower} within the federated learning (FL) framework.
\textsf{FedPower} leverages the well-known power method by alternating multiple local power iterations and a global aggregation step.
We provide convergence bounds for \textsf{FedPower} that are composed of different interpretable terms corresponding to the effects of Gaussian noise, parallelization, and random sampling of local machines.
- Score: 38.17106202734679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Eigenspace estimation is fundamental in machine learning and
statistics and has found applications in PCA, dimension reduction, and
clustering, among others. In modern machine learning, data often come from,
and belong to, different organizations. Limited communication bandwidth and
the risk of privacy breaches make computing the eigenspace challenging. To
address these challenges, we propose a class of algorithms
called \textsf{FedPower} within the federated learning (FL) framework.
\textsf{FedPower} leverages the well-known power method by alternating multiple
local power iterations and a global aggregation step, thus improving
communication efficiency. In the aggregation, we propose to weight each local
eigenvector matrix with the {\it Orthogonal Procrustes Transformation} (OPT) for
better alignment. To ensure strong privacy protection, we add Gaussian noise in
each iteration by adopting the notion of \emph{differential privacy} (DP). We
provide convergence bounds for \textsf{FedPower} that are composed of different
interpretable terms corresponding to the effects of Gaussian noise,
parallelization, and random sampling of local machines. Additionally, we
conduct experiments to demonstrate the effectiveness of our proposed
algorithms.
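
To make the algorithmic recipe concrete, here is a minimal Python sketch of a FedPower-style loop. It is not the authors' reference implementation: the equal-weight aggregation, the fixed noise scale `noise_std`, and the per-step orthonormalization are illustrative assumptions, and the calibration of the noise to a formal DP budget is omitted.

```python
import numpy as np

def procrustes_align(Z, Z_ref):
    # Orthogonal Procrustes Transformation (OPT): the orthogonal O minimizing
    # ||Z @ O - Z_ref||_F is U @ Vt, where U, S, Vt = svd(Z.T @ Z_ref).
    U, _, Vt = np.linalg.svd(Z.T @ Z_ref)
    return Z @ (U @ Vt)

def fedpower_sketch(A_locals, k, rounds=10, local_iters=4, noise_std=0.1, seed=0):
    # A_locals: per-machine d x d symmetric matrices with A = sum(A_locals).
    # Returns a d x k orthonormal basis approximating the top-k eigenspace of A.
    rng = np.random.default_rng(seed)
    d = A_locals[0].shape[0]
    Z, _ = np.linalg.qr(rng.standard_normal((d, k)))  # shared random start
    for _ in range(rounds):
        local_bases = []
        for A in A_locals:
            Y = Z
            for _ in range(local_iters):
                Y = A @ Y + noise_std * rng.standard_normal((d, k))  # Gaussian noise for DP
                Y, _ = np.linalg.qr(Y)  # local orthonormalization
            local_bases.append(Y)
        # Global aggregation: OPT-align every local basis to the first,
        # average, and re-orthonormalize.
        aligned = [procrustes_align(Y, local_bases[0]) for Y in local_bases]
        Z, _ = np.linalg.qr(sum(aligned) / len(aligned))
    return Z
```

With `noise_std=0` this reduces to a distributed power method with OPT-aligned averaging; the paper's convergence bounds separate exactly these ingredients: Gaussian noise, parallelization, and random sampling of local machines.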
Related papers
- Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm for minimizing nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants (a standard zCDP-to-DP conversion is sketched after this list).
We provide complete theoretical proofs for the privacy guarantees and the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z)
- Randomized Quantization is All You Need for Differential Privacy in Federated Learning [1.9785872350085876]
We consider an approach to federated learning that combines quantization and differential privacy.
We develop a new algorithm called the Randomized Quantization Mechanism (RQM); a generic stochastic-quantizer sketch appears after this list for context.
We empirically study the performance of our algorithm and demonstrate that, compared to previous work, it yields improved privacy-accuracy trade-offs.
arXiv Detail & Related papers (2023-06-20T21:54:13Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It achieves a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round of federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Differentially Private Federated Learning via Inexact ADMM with Multiple Local Updates [0.0]
We develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning.
We show that our algorithm provides $\bar{\varepsilon}$-DP for every iteration, where $\bar{\varepsilon}$ is a privacy budget controlled by the user.
We demonstrate that our algorithm reduces the testing error by up to $31\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2022-02-18T19:58:47Z)
- A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing [5.404315085380945]
We present an original communication-constrained Newton-type (NT) algorithm designed to accelerate Federated Learning (FL).
The proposed solution is thoroughly validated on real datasets.
arXiv Detail & Related papers (2022-02-11T17:52:56Z)
- Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach [65.27783264330711]
Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity.
We devise algorithms that learn optimal tilt control policies from existing data.
We show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
arXiv Detail & Related papers (2022-01-06T18:24:30Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model (a minimal FedAvg-style sketch appears after this list).
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
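
For context on the zCDP notion used in "Private Networked Federated Learning for Nonsmooth Objectives" above, two standard facts from the zCDP literature (not details of that paper) are that the Gaussian mechanism with $\ell_2$-sensitivity $\Delta$ and noise scale $\sigma$ satisfies $\rho$-zCDP with $\rho = \Delta^2 / (2\sigma^2)$, and that $\rho$-zCDP implies $(\rho + 2\sqrt{\rho \ln(1/\delta)},\, \delta)$-DP. A small helper sketch:

```python
import math

def gaussian_zcdp_rho(sensitivity, sigma):
    # rho-zCDP of the Gaussian mechanism: rho = Delta^2 / (2 * sigma^2).
    return sensitivity ** 2 / (2 * sigma ** 2)

def zcdp_to_dp(rho, delta):
    # Standard conversion: rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP.
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))
```

For example, `zcdp_to_dp(gaussian_zcdp_rho(1.0, 10.0), 1e-5)` gives the $\varepsilon$ of a sensitivity-1 query perturbed with $\sigma = 10$ noise at $\delta = 10^{-5}$.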
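The RQM entry above does not spell out its mechanism; as background, a common primitive in work combining quantization with differential privacy is unbiased stochastic quantization. The sketch below is a generic illustration of that primitive, not the paper's RQM:

```python
import numpy as np

def stochastic_quantize(x, levels, rng):
    # Unbiased stochastic quantization of the array x onto a uniform grid
    # of `levels` points spanning [x.min(), x.max()]: each entry rounds up
    # with probability equal to its fractional grid position, so E[output] = x.
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:                      # constant input: nothing to quantize
        return x.copy()
    step = (hi - lo) / (levels - 1)
    t = (x - lo) / step               # position in grid units
    base = np.floor(t)
    round_up = rng.random(x.shape) < (t - base)
    return lo + (base + round_up) * step
```

Calling `stochastic_quantize(np.linspace(-1, 1, 8), levels=5, rng=np.random.default_rng(0))` maps each value to one of 5 grid points while preserving it in expectation.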
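Finally, the "computation then aggregation" (CTA) pattern mentioned in the FedPD entry is easiest to see in a textbook FedAvg-style round; the sketch below is illustrative (equal client weights, plain gradient steps), not FedPD itself:

```python
import numpy as np

def fedavg_round(global_w, client_grad_fns, lr=0.1, local_steps=5):
    # "Computation then aggregation" (CTA): each client takes local_steps of
    # gradient descent from the shared model on its own data, then the
    # server averages the updated parameter vectors.
    updated = []
    for grad_fn in client_grad_fns:   # grad_fn(w) -> gradient on local data
        w = np.array(global_w, dtype=float)
        for _ in range(local_steps):
            w = w - lr * grad_fn(w)   # local computation
        updated.append(w)
    return np.mean(updated, axis=0)   # aggregation on the server
```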