Secure Aggregation for Buffered Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2110.02177v1
- Date: Tue, 5 Oct 2021 17:07:02 GMT
- Title: Secure Aggregation for Buffered Asynchronous Federated Learning
- Authors: Jinhyun So, Ramy E. Ali, Başak Güler, A. Salman Avestimehr
- Abstract summary: Federated learning (FL) typically relies on synchronous training, which is slow due to stragglers.
While asynchronous training handles stragglers efficiently, it does not ensure privacy because it is incompatible with secure aggregation protocols.
FedBuff, a recently proposed buffered asynchronous training protocol, bridges the gap between synchronous and asynchronous training to mitigate stragglers while also ensuring privacy.
- Score: 4.893896929103367
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Federated learning (FL) typically relies on synchronous training, which is
slow due to stragglers. While asynchronous training handles stragglers
efficiently, it does not ensure privacy because it is incompatible with secure
aggregation protocols. FedBuff, a recently proposed buffered asynchronous
training protocol, bridges the gap between synchronous and asynchronous training
to mitigate stragglers while also ensuring privacy. FedBuff allows users to
send their updates asynchronously
while ensuring privacy by storing the updates in a trusted execution
environment (TEE) enabled private buffer. TEEs, however, have limited memory
which limits the buffer size. Motivated by this limitation, we develop a
buffered asynchronous secure aggregation (BASecAgg) protocol that does not rely
on TEEs. The conventional secure aggregation protocols cannot be applied in the
buffered asynchronous setting since the buffer may have local models
corresponding to different rounds and hence the masks that the users use to
protect their models may not cancel out. BASecAgg addresses this challenge by
carefully designing the masks such that they cancel out even if they correspond
to different rounds. Our convergence analysis and experiments show that
BASecAgg has almost the same convergence guarantees as FedBuff without relying
on TEEs.
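As a rough illustration of why mask cancellation matters, the classical pairwise-masking idea behind secure aggregation can be sketched as below. This is a toy sketch, not the BASecAgg construction: the user set, the integer seeding scheme, and all function names are illustrative assumptions, and a real protocol would derive masks from a cryptographic key agreement rather than a seeded PRNG. BASecAgg's contribution is designing the masks so that this cancellation still holds when buffered updates come from different rounds.

```python
import random

DIM = 4          # toy model dimension (assumption)
USERS = [0, 1, 2]  # toy user set (assumption)

def shared_mask(i, j, dim):
    # Toy stand-in for pairwise key agreement: both users of the pair
    # derive the same mask from a seed only they conceptually share.
    lo, hi = min(i, j), max(i, j)
    rng = random.Random(lo * 1000 + hi)
    return [rng.random() for _ in range(dim)]

def masked_update(i, update):
    # User i ADDS the mask for each pair (i, j) with i < j and SUBTRACTS it
    # when i > j, so every pairwise mask appears exactly once with each sign
    # across the full set of users and cancels in the server-side sum.
    out = list(update)
    for j in USERS:
        if j == i:
            continue
        sign = 1.0 if i < j else -1.0
        for k, m in enumerate(shared_mask(i, j, len(update))):
            out[k] += sign * m
    return out

# Each user's raw update is hidden; the server only ever sees masked vectors.
updates = {i: [float(i + 1)] * DIM for i in USERS}
server_sum = [sum(col) for col in
              zip(*(masked_update(i, u) for i, u in updates.items()))]
true_sum = [sum(col) for col in zip(*updates.values())]
```

Each individual `masked_update` looks random to the server, yet `server_sum` equals `true_sum` because the pairwise masks cancel. In the buffered asynchronous setting the masks would be keyed per round, so masks from different rounds in the same buffer would not cancel this way, which is exactly the failure mode BASecAgg's mask design addresses.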
Related papers
- LATTEO: A Framework to Support Learning Asynchronously Tempered with Trusted Execution and Obfuscation [6.691450146654845]
We propose a privacy-preserving framework that combines a gradient obfuscation mechanism with Trusted Execution Environments (TEEs) for secure asynchronous FL aggregation at the network edge.
Our mechanism enables clients to implicitly verify TEE-based aggregation services, effectively handle on-demand client participation, and scale seamlessly with an increasing number of asynchronous connections.
arXiv Detail & Related papers (2025-02-07T01:21:37Z) - A Novel Buffered Federated Learning Framework for Privacy-Driven Anomaly Detection in IIoT [11.127334284392676]
We propose a Buffered FL (BFL) framework empowered by homomorphic encryption for anomaly detection in heterogeneous IIoT environments.
BFL utilizes a novel weighted average time approach to mitigate both straggler effects and communication bottlenecks.
Results show the superiority of BFL compared to state-of-the-art FL methods, demonstrating improved accuracy and convergence speed.
arXiv Detail & Related papers (2024-08-16T13:01:59Z) - Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z) - Buffered Asynchronous Secure Aggregation for Cross-Device Federated Learning [16.682175699793635]
We propose a novel secure aggregation protocol named buffered asynchronous secure aggregation (BASA)
BASA is fully compatible with AFL and provides secure aggregation under the condition that each user only needs one round of communication with the server without relying on any synchronous interaction among users.
Based on BASA, we propose the first AFL method which achieves secure aggregation without extra requirements on hardware.
arXiv Detail & Related papers (2024-06-05T16:39:32Z) - FedFa: A Fully Asynchronous Training Paradigm for Federated Learning [14.4313600357833]
Federated learning is an efficient decentralized training paradigm for scaling the machine learning model training on a large number of devices.
Recent state-of-the-art solutions propose using semi-asynchronous approaches to mitigate the waiting time cost with guaranteed convergence.
We propose a full asynchronous training paradigm, called FedFa, which can guarantee model convergence and eliminate the waiting time completely.
arXiv Detail & Related papers (2024-04-17T02:46:59Z) - Differentially Private Wireless Federated Learning Using Orthogonal
Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
arXiv Detail & Related papers (2023-06-14T06:35:10Z) - ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated
Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Unbounded Gradients in Federated Learning with Buffered Asynchronous
Aggregation [0.6526824510982799]
The FedBuff algorithm allows asynchronous updates while preserving privacy via secure aggregation.
This paper presents a theoretical analysis of the convergence rate of this algorithm when heterogeneity in data, batch size, and delay are considered.
arXiv Detail & Related papers (2022-10-03T18:20:48Z) - Semi-Synchronous Personalized Federated Learning over Mobile Edge
Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed as Semi-Synchronous Personalized Federated Averaging (PerFedS2), over mobile edge networks.
We derive an upper bound of the convergence rate of PerFedS2 in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z) - Federated Learning with Buffered Asynchronous Aggregation [0.7327285556439885]
Federated Learning (FL) trains a shared model across distributed devices while keeping the training data on the devices.
Most FL schemes are synchronous: they perform a synchronized aggregation of model updates from individual devices.
We propose FedBuff, that combines synchronous and asynchronous FL.
arXiv Detail & Related papers (2021-06-11T23:29:48Z) - Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while multiple workers' updates incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z)