SwiftAgg: Communication-Efficient and Dropout-Resistant Secure
Aggregation for Federated Learning with Worst-Case Security Guarantees
- URL: http://arxiv.org/abs/2202.04169v1
- Date: Tue, 8 Feb 2022 22:08:56 GMT
- Title: SwiftAgg: Communication-Efficient and Dropout-Resistant Secure
Aggregation for Federated Learning with Worst-Case Security Guarantees
- Authors: Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe
Caire
- Abstract summary: We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data.
SwiftAgg significantly reduces the communication overheads without any compromise on security.
- Score: 83.94234859890402
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose SwiftAgg, a novel secure aggregation protocol for federated
learning systems, where a central server aggregates local models of $N$
distributed users, each of size $L$, trained on their local data, in a
privacy-preserving manner. Compared with state-of-the-art secure aggregation
protocols, SwiftAgg significantly reduces the communication overheads without
any compromise on security. Specifically, in the presence of at most $D$ dropout
users, SwiftAgg achieves a users-to-server communication load of $(T+1)L$ and a
users-to-users communication load of up to $(N-1)(T+D+1)L$, with a worst-case
information-theoretic security guarantee, against any subset of up to $T$
semi-honest users who may also collude with the curious server. The key idea of
SwiftAgg is to partition the users into groups of size $D+T+1$, then in the
first phase, secret sharing and aggregation of the individual models are
performed within each group, and then in the second phase, model aggregation is
performed on $D+T+1$ sequences of users across the groups. If a user in a
sequence drops out in the second phase, the remaining users in that sequence stay
silent. This design allows only a subset of users to communicate with each other,
and only the users in a single group to communicate directly with the server,
eliminating two requirements of other secure aggregation protocols: 1) an
all-to-all communication network across the users; and 2) all users communicating
with the server. This substantially reduces the communication costs of the system.
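The intra-group phase described above can be made concrete with a toy sketch. The following Python snippet is an illustrative simplification, not the paper's actual construction (SwiftAgg's exact sharing scheme, field size, group parameters, and all variable names here are assumptions): it shows how additively homomorphic Shamir sharing within a group of size $D+T+1$ lets the group aggregate survive $D$ dropouts while any $T$ colluding users learn nothing about an individual model.

```python
import random

P = 2**31 - 1  # prime field modulus (illustrative choice)

def share(secret, n, t):
    """Shamir-share one field element among n parties with a degree-t
    polynomial: any t shares reveal nothing; any t+1 reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from (x, share) pairs."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# One group of size D + T + 1: tolerates D dropouts and T colluders.
D, T = 1, 1
n = D + T + 1          # 3 users in the group
models = [5, 7, 11]    # each user's local "model" (one field element)

# Phase 1: every user secret-shares its model within the group ...
all_shares = [share(m, n, T) for m in models]
# ... and each user locally sums the shares it received.
summed = [sum(s[i] for s in all_shares) % P for i in range(n)]

# Even if D = 1 user drops out, T + 1 = 2 summed shares still
# recover the group aggregate (but never any individual model).
points = [(1, summed[0]), (2, summed[1])]   # user 3 dropped out
assert reconstruct(points) == sum(models) % P
```

Because the shares are additively homomorphic, summing shares locally yields shares of the sum, which is what allows aggregation to proceed without any party ever seeing another user's model in the clear.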
Related papers
- $\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning [6.977111770337479]
We introduce One-shot Private Aggregation ($\mathsf{OPA}$), where clients speak only once (or even choose not to speak) per aggregation evaluation.
Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation.
$\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions.
arXiv Detail & Related papers (2024-10-29T17:50:11Z) - Federated Contextual Cascading Bandits with Asynchronous Communication
and Heterogeneous Users [95.77678166036561]
We propose a UCB-type algorithm with delicate communication protocols.
We give sub-linear regret bounds on par with those achieved in the synchronous framework.
Empirical evaluation on synthetic and real-world datasets validates our algorithm's superior performance in terms of regrets and communication costs.
arXiv Detail & Related papers (2024-02-26T05:31:14Z) - A Simple and Provably Efficient Algorithm for Asynchronous Federated
Contextual Linear Bandits [77.09836892653176]
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server.
We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents' communication.
We prove that the regret of $\texttt{FedLinUCB}$ is bounded by $\tilde{O}(d\sqrt{\sum_{m=1}^{M} T_m})$ and the communication complexity is $\tilde{O}(dM^2)$.
arXiv Detail & Related papers (2022-07-07T06:16:19Z) - SwiftAgg+: Achieving Asymptotically Optimal Communication Load in Secure
Aggregation for Federated Learning [83.94234859890402]
SwiftAgg+ is a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N \in \mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.
arXiv Detail & Related papers (2022-03-24T13:12:23Z) - Coordinated Attacks against Contextual Bandits: Fundamental Limits and
Defense Mechanisms [75.17357040707347]
Motivated by online recommendation systems, we propose the problem of finding the optimal policy in contextual bandits.
The goal is to robustly learn the policy that maximizes rewards for good users with as few user interactions as possible.
We show that we can achieve an $\tilde{O}(\min(S,A)\cdot\alpha/\epsilon^2)$ upper bound by employing efficient robust mean estimators.
arXiv Detail & Related papers (2022-01-30T01:45:13Z) - Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
arXiv Detail & Related papers (2021-11-14T16:09:11Z) - Information Theoretic Secure Aggregation with User Dropouts [56.39267027829569]
A server wishes to learn, and only learn, the sum of the inputs of a number of users, while some users may drop out (i.e., may not respond).
We consider the following minimal two-round model of secure aggregation.
arXiv Detail & Related papers (2021-01-19T17:43:48Z) - Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure
Federated Learning [2.294014185517203]
A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users.
In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that achieves a secure aggregation overhead of $O(N\log N)$.
We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users.
arXiv Detail & Related papers (2020-02-11T01:15:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.