CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure
Aggregation in Federated Learning
- URL: http://arxiv.org/abs/2112.08909v1
- Date: Thu, 16 Dec 2021 14:26:30 GMT
- Title: CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure
Aggregation in Federated Learning
- Authors: Reent Schlegel, Siddhartha Kumar, Eirik Rosnes, Alexandre Graell i
Amat
- Abstract summary: We present two novel coded federated learning (FL) schemes for linear regression that mitigate the effect of straggling devices.
The first scheme, CodedPaddedFL, mitigates the effect of straggling devices while retaining the privacy level of conventional FL.
The second scheme, CodedSecAgg, provides straggler resiliency and robustness against model inversion attacks.
- Score: 86.98177890676077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present two novel coded federated learning (FL) schemes for linear
regression that mitigate the effect of straggling devices. The first scheme,
CodedPaddedFL, mitigates the effect of straggling devices while retaining the
privacy level of conventional FL. Particularly, it combines one-time padding
for user data privacy with gradient codes to yield resiliency against
straggling devices. To apply one-time padding to real data, our scheme exploits
a fixed-point arithmetic representation of the data. For a scenario with 25
devices, CodedPaddedFL achieves a speed-up factor of 6.6 and 9.2 for an
accuracy of 95\% and 85\% on the MNIST and Fashion-MNIST datasets,
respectively, compared to conventional FL. Furthermore, it yields similar
performance in terms of latency compared to a recently proposed scheme by
Prakash \emph{et al.} without the shortcoming of additional leakage of private
data. The second scheme, CodedSecAgg, provides straggler resiliency and
robustness against model inversion attacks and is based on Shamir's secret
sharing. CodedSecAgg outperforms state-of-the-art secure aggregation schemes
such as LightSecAgg by a speed-up factor of 6.6--14.6, depending on the number
of colluding devices, on the MNIST dataset for a scenario with 120 devices, at
the expense of a 30\% increase in latency compared to CodedPaddedFL.
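The abstract describes CodedPaddedFL only at a high level, so the following is a minimal sketch rather than the authors' implementation: it illustrates the two ingredients named above for data privacy, a fixed-point representation of real-valued data and an additive one-time pad over a finite field, which the pad holder can later remove exactly. The modulus q, the number of fractional bits, and all function names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code): one-time padding of fixed-point data.
# Real-valued data is mapped to integers (fixed-point), hidden with a uniformly
# random one-time pad modulo a large prime q, and recovered exactly by the
# holder of the pad. All parameters below are illustrative assumptions.

q = 2**61 - 1          # prime modulus of the padded (finite-field) domain
n_frac = 20            # fractional bits of the fixed-point representation
rng = np.random.default_rng(0)

def to_fixed_point(x):
    """Map reals to non-negative integers mod q."""
    return np.round(x * 2**n_frac).astype(np.int64) % q

def from_fixed_point(z):
    """Map integers mod q back to reals, reading the upper half as negatives."""
    z = np.asarray(z, dtype=np.int64)
    z = np.where(z > q // 2, z - q, z)
    return z / 2**n_frac

x = np.array([0.5, -1.25, 3.0])                          # a device's local data
pad = rng.integers(0, q, size=x.shape, dtype=np.int64)   # its secret one-time pad

padded = (to_fixed_point(x) + pad) % q                   # what leaves the device
recovered = from_fixed_point((padded - pad) % q)         # pad holder removes the pad
assert np.allclose(recovered, x)
```

Because the pad is additive in the field, linear-regression gradient computations on padded data stay linear in the pad, which is what allows gradient codes to be layered on top for straggler resilience; the coding details are in the paper itself.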
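CodedSecAgg is described above only as being based on Shamir's secret sharing. The snippet below is the generic Shamir (t, n) building block, not the CodedSecAgg protocol: a secret is split into n shares so that any t of them reconstruct it while fewer than t reveal nothing, which is what bounds the damage from colluding devices. The prime p and the helper names are illustrative assumptions.

```python
import random

# Generic Shamir (t, n) secret sharing over GF(p) -- a building block only,
# not the CodedSecAgg protocol itself.

p = 2**61 - 1  # prime field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice
```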
Related papers
- A Novel Buffered Federated Learning Framework for Privacy-Driven Anomaly Detection in IIoT [11.127334284392676]
We propose a Buffered FL (BFL) framework empowered by homomorphic encryption for anomaly detection in heterogeneous IIoT environments.
BFL utilizes a novel weighted average time approach to mitigate both straggler effects and communication bottlenecks.
Results show the superiority of BFL compared to state-of-the-art FL methods, demonstrating improved accuracy and convergence speed.
arXiv Detail & Related papers (2024-08-16T13:01:59Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing the communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- ScionFL: Efficient and Robust Secure Quantized Aggregation [36.668162197302365]
We introduce ScionFL, the first secure aggregation framework for federated learning that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.
We show that with no overhead for clients and moderate overhead for the server, we obtain comparable accuracy for standard FL benchmarks.
arXiv Detail & Related papers (2022-10-13T21:46:55Z)
- Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$^2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost; a minimal sketch of one such smooth surrogate appears after this list.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
- Coding for Straggler Mitigation in Federated Learning [86.98177890676077]
The proposed scheme combines one-time padding to preserve privacy and gradient codes to yield resiliency against stragglers.
We show that the proposed scheme achieves a training speed-up factor of $6.6$ and $9.2$ on the MNIST and Fashion-MNIST datasets for an accuracy of $95\%$ and $85\%$, respectively.
arXiv Detail & Related papers (2021-09-30T15:53:35Z)
- Fast Federated Learning in the Presence of Arbitrary Device Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
arXiv Detail & Related papers (2021-06-08T07:46:31Z)
- XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning [49.130350799077114]
We develop a privacy-preserving XOR-based mixup data augmentation technique, coined XorMixup.
The core idea is to collect other devices' encoded data samples that can be decoded only using each device's own data samples; the XOR encode/decode principle is sketched after this list.
XorMixFL achieves up to 17.6% higher accuracy than vanilla FL on a non-IID MNIST dataset.
arXiv Detail & Related papers (2020-06-09T09:43:41Z)
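The XorMixup entry above names the mechanism without showing it; as a toy illustration (not the XorMixFL pipeline), the snippet below relies on XOR being its own inverse: a sample mixed with a seed sample can be unmixed only by whoever holds that seed. Array shapes and variable names are assumptions.

```python
import numpy as np

# Toy illustration of the XOR encode/decode idea behind XorMixup:
# XOR is self-inverse, so (a XOR b) XOR b == a.

rng = np.random.default_rng(1)
private_sample = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)  # e.g. an MNIST digit
seed_sample = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)     # held by the decoder

encoded = np.bitwise_xor(private_sample, seed_sample)   # what gets shared
decoded = np.bitwise_xor(encoded, seed_sample)          # decoder removes the seed
assert np.array_equal(decoded, private_sample)
```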
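The sFedHP entry above mentions a continuously differentiable approximation of the L1-norm without giving its form. One common smooth surrogate, shown here purely as an illustrative stand-in (the paper may define a different one), is f_eps(x) = sum_i sqrt(x_i^2 + eps^2), which approaches ||x||_1 as eps -> 0 and has a well-defined gradient everywhere.

```python
import numpy as np

# Illustrative smooth L1 surrogate (an assumption, not sFedHP's exact choice):
# f_eps(x) = sum_i sqrt(x_i**2 + eps**2), differentiable everywhere.

def smooth_l1(x, eps=1e-3):
    return np.sum(np.sqrt(x**2 + eps**2))

def smooth_l1_grad(x, eps=1e-3):
    return x / np.sqrt(x**2 + eps**2)   # bounded, unlike sign(x) at 0

w = np.array([0.0, 0.5, -2.0])
print(smooth_l1(w), smooth_l1_grad(w))
```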