SwiftAgg+: Achieving Asymptotically Optimal Communication Load in Secure
Aggregation for Federated Learning
- URL: http://arxiv.org/abs/2203.13060v1
- Date: Thu, 24 Mar 2022 13:12:23 GMT
- Title: SwiftAgg+: Achieving Asymptotically Optimal Communication Load in Secure
Aggregation for Federated Learning
- Authors: Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe
Caire
- Abstract summary: SwiftAgg+ is a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N\in\mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.
- Score: 83.94234859890402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose SwiftAgg+, a novel secure aggregation protocol for federated
learning systems, where a central server aggregates local models of
$N\in\mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on
their local data, in a privacy-preserving manner. SwiftAgg+ can significantly
reduce the communication overheads without any compromise on security, and
achieve the optimum communication load within a diminishing gap. Specifically,
in the presence of at most $D$ dropout users, SwiftAgg+ achieves an average per-user
communication load of $(1+\mathcal{O}(\frac{1}{N}))L$ and the server
communication load of $(1+\mathcal{O}(\frac{1}{N}))L$, with a worst-case
information-theoretic security guarantee, against any subset of up to $T$
semi-honest users who may also collude with the curious server. The proposed
SwiftAgg+ also offers the flexibility to reduce the number of active communication
links at the cost of increasing the communication load between the users
and the server. In particular, for any $K\in\mathbb{N}$, SwiftAgg+ can achieve
the uplink communication load of $(1+\frac{T}{K})L$, and per-user communication
load of up to $(1-\frac{1}{N})(1+\frac{T+D}{K})L$, where the number of
pair-wise active connections in the network is $\frac{N}{2}(K+T+D+1)$.
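As a quick illustration of how the parameter $K$ trades network connectivity for load, the following sketch (our own, not from the paper; the function name and sample parameters are hypothetical) evaluates the closed-form expressions quoted in the abstract:

```python
def swiftaggplus_loads(N, T, D, K, L=1.0):
    """Evaluate the SwiftAgg+ load expressions from the abstract.

    N: number of users, T: max colluding semi-honest users,
    D: max dropouts, K: group-size parameter, L: model size (normalized).
    Returns (uplink load, per-user load upper bound, active connections).
    """
    uplink = (1 + T / K) * L                         # uplink load: (1 + T/K) L
    per_user = (1 - 1 / N) * (1 + (T + D) / K) * L   # per-user load: up to (1 - 1/N)(1 + (T+D)/K) L
    connections = N / 2 * (K + T + D + 1)            # pair-wise active connections: N/2 (K+T+D+1)
    return uplink, per_user, connections

# Example: N=100 users, T=10 colluders, D=5 dropouts, normalized L=1.
for K in (1, 5, 15):
    up, pu, conn = swiftaggplus_loads(N=100, T=10, D=5, K=K)
    print(f"K={K:2d}: uplink={up:.2f}L  per-user<={pu:.2f}L  connections={conn:.0f}")
```

Increasing $K$ drives both loads toward $L$ (the obvious lower bound of sending one model's worth of data) at the price of more active pairwise links, which is the trade-off the abstract describes.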
Related papers
- Asynchronous Approximate Agreement with Quadratic Communication [23.27199615640474]
We consider an asynchronous network of $n$ message-sending parties, up to $t$ of which are Byzantine.
We study approximate agreement, where the parties obtain approximately equal outputs in the convex hull of their inputs.
This takes $\Theta(n^2)$ messages per reliable broadcast, or $\Theta(n^3)$ messages per iteration.
arXiv Detail & Related papers (2024-08-10T09:03:06Z)
- Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation [77.09836892653176]
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes.
We propose a provably efficient algorithm based on value iteration that enables asynchronous communication.
We show that a minimal $\Omega(dM)$ communication complexity is required to improve the performance through collaboration.
arXiv Detail & Related papers (2023-05-10T20:29:29Z)
- A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits [77.09836892653176]
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server.
We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents' communication.
We prove that the regret of FedLinUCB is bounded by $\tilde{O}(d\sqrt{\sum_{m=1}^{M} T_m})$ and the communication complexity is $\tilde{O}(dM
arXiv Detail & Related papers (2022-07-07T06:16:19Z)
- SwiftAgg: Communication-Efficient and Dropout-Resistant Secure Aggregation for Federated Learning with Worst-Case Security Guarantees [83.94234859890402]
We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data.
SwiftAgg significantly reduces the communication overheads without any compromise on security.
arXiv Detail & Related papers (2022-02-08T22:08:56Z)
- Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms [75.17357040707347]
Motivated by online recommendation systems, we propose the problem of finding the optimal policy in contextual bandits.
The goal is to robustly learn the policy that maximizes rewards for good users with as few user interactions as possible.
We show we can achieve an $\tilde{O}(\min(S,A)\cdot \alpha/\epsilon^2)$ upper bound by employing efficient robust mean estimators.
arXiv Detail & Related papers (2022-01-30T01:45:13Z)
- FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning [21.055643409860743]
Federated Averaging (FedAvg, also known as Local-SGD) is a classical federated learning algorithm in which clients run multiple local SGD steps before communicating their updates to an orchestrating server.
We propose a new federated learning algorithm, FedPAGE, able to reduce the communication complexity by utilizing the recent optimal PAGE method.
arXiv Detail & Related papers (2021-08-10T15:41:27Z)
- Communication-efficient SGD: From Local SGD to One-Shot Averaging [16.00658606157781]
We consider speeding up stochastic gradient descent (SGD) by parallelizing it across multiple workers.
We suggest a Local SGD scheme that communicates less overall by communicating less frequently as the number of iterations grows.
arXiv Detail & Related papers (2021-06-09T01:10:34Z)
- Information Theoretic Secure Aggregation with User Dropouts [56.39267027829569]
A server wishes to learn, and only learn, the sum of the inputs of a number of users, while some users may drop out (i.e., may not respond).
We consider the following minimal two-round model of secure aggregation.
arXiv Detail & Related papers (2021-01-19T17:43:48Z)
- Federated Bandit: A Gossiping Approach [22.595542913855624]
We study Federated Bandit, a decentralized Multi-Armed Bandit problem with a set of $N$ agents, who can only communicate their local data with neighbors described by a connected graph $G$.
We show that Gossip_UCB successfully adapts local bandit learning into a global gossiping process for sharing information among connected agents, and achieves guaranteed regret at the order of $O(\max\{\texttt{poly}(N,M)\log T, \texttt{poly}(N,M)\log_{\lambda^{-1}}
arXiv Detail & Related papers (2020-10-24T03:44:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.