Coding for Straggler Mitigation in Federated Learning
- URL: http://arxiv.org/abs/2109.15226v1
- Date: Thu, 30 Sep 2021 15:53:35 GMT
- Title: Coding for Straggler Mitigation in Federated Learning
- Authors: Siddhartha Kumar, Reent Schlegel, Eirik Rosnes, Alexandre Graell i Amat
- Abstract summary: The proposed scheme combines one-time padding to preserve privacy and gradient codes to yield resiliency against stragglers.
We show that the proposed scheme achieves a training speed-up factor of $6.6$ and $9.2$ on the MNIST and Fashion-MNIST datasets for an accuracy of $95\%$ and $85\%$, respectively.
- Score: 86.98177890676077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel coded federated learning (FL) scheme for linear regression
that mitigates the effect of straggling devices while retaining the privacy
level of conventional FL. The proposed scheme combines one-time padding to
preserve privacy and gradient codes to yield resiliency against stragglers and
consists of two phases. In the first phase, the devices share a one-time padded
version of their local data with a subset of other devices. In the second
phase, the devices and the central server collaboratively and iteratively train
a global linear model using gradient codes on the one-time padded local data.
To apply one-time padding to real data, our scheme exploits a fixed-point
arithmetic representation of the data. Unlike the coded FL scheme recently
introduced by Prakash et al., the proposed scheme maintains the same level of
privacy as conventional FL while achieving a similar training time. Compared to
conventional FL, we show that the proposed scheme achieves a training speed-up
factor of $6.6$ and $9.2$ on the MNIST and Fashion-MNIST datasets for an
accuracy of $95\%$ and $85\%$, respectively.
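To make the two building blocks in the abstract concrete, below is a minimal sketch (not the authors' exact protocol) of the first ingredient: local data is quantized to a fixed-point representation and masked with an additive one-time pad over the integers modulo $2^b$, so that the copy shared with other devices reveals nothing about the data while the pad can later be removed exactly. The bit widths and helper names are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's exact choices).
B_TOTAL = 32                 # total bits of the fixed-point representation
B_FRAC = 16                  # fractional bits
MODULUS = 2 ** B_TOTAL       # all padded arithmetic is done modulo 2**B_TOTAL


def to_fixed_point(x: np.ndarray) -> np.ndarray:
    """Quantize real-valued data to integers modulo 2**B_TOTAL (two's-complement style)."""
    return np.round(x * 2 ** B_FRAC).astype(np.int64) % MODULUS


def from_fixed_point(x_fp: np.ndarray) -> np.ndarray:
    """Map integers modulo 2**B_TOTAL back to reals, treating the upper half as negative."""
    signed = np.where(x_fp >= MODULUS // 2, x_fp - MODULUS, x_fp)
    return signed / 2 ** B_FRAC


def one_time_pad(x_fp: np.ndarray, rng: np.random.Generator):
    """Add a uniformly random key modulo 2**B_TOTAL; the padded array is what a device would share."""
    key = rng.integers(0, MODULUS, size=x_fp.shape, dtype=np.int64)
    return (x_fp + key) % MODULUS, key


def remove_pad(padded: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Undo the one-time pad given the key."""
    return (padded - key) % MODULUS


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local_data = rng.normal(size=(4, 3))               # a device's local features
    fp = to_fixed_point(local_data)
    padded, key = one_time_pad(fp, rng)                # phase 1: share the padded copy
    recovered = from_fixed_point(remove_pad(padded, key))
    assert np.allclose(recovered, local_data, atol=2.0 ** -B_FRAC)
```

The straggler resiliency of the second phase comes from gradient codes. The toy example below is the classic three-device, one-straggler construction in the spirit of Tandon et al. (not necessarily the code used in the paper): each device holds two of three data partitions and sends a single coded combination of its partial gradients, and the server recovers the full gradient from any two responding devices.

```python
import numpy as np

# Stand-in per-partition gradients; in coded FL these would be computed from the (padded) local data.
g1, g2, g3 = np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])
full_gradient = g1 + g2 + g3

# Each device sends one coded combination of the partial gradients it can compute.
coded = {
    1: 0.5 * g1 + g2,   # device 1 holds partitions {1, 2}
    2: g2 - g3,         # device 2 holds partitions {2, 3}
    3: 0.5 * g1 + g3,   # device 3 holds partitions {1, 3}
}

# For every pair of non-straggling devices there is a decoding combination
# that recovers the full gradient, so any single straggler can be ignored.
decoders = {
    frozenset({1, 2}): {1: 2.0, 2: -1.0},
    frozenset({1, 3}): {1: 1.0, 3: 1.0},
    frozenset({2, 3}): {2: 1.0, 3: 2.0},
}

for survivors, coeffs in decoders.items():
    decoded = sum(c * coded[dev] for dev, c in coeffs.items())
    assert np.allclose(decoded, full_gradient)
```

In the paper's scheme these two ingredients are combined, with the gradient-code computations carried out on the one-time-padded fixed-point data; the sketches above illustrate each ingredient only in isolation.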
Related papers
- FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation [15.298650496155508]
Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.
Existing FL methods face challenges when dealing with scarce and label-skewed data across devices, resulting in local model overfitting and drift.
We propose a pioneering framework called FLea, which incorporates several key components to address these challenges.
arXiv Detail & Related papers (2024-06-13T19:28:08Z)
- Federated Learning with Reduced Information Leakage and Computation [17.069452700698047]
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
This paper introduces Upcycled-FL, a strategy that applies a first-order approximation at every even round of model updates.
Under this strategy, half of the FL updates incur no information leakage and require much less computational and transmission costs.
arXiv Detail & Related papers (2023-10-10T06:22:06Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable optimization problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step towards online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm named sparse federated learning with hierarchical personalized models (sFedHP), which uses a hierarchical proximal mapping based on the Moreau envelope.
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost.
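For reference, the Moreau envelope of a function $f$ with smoothing parameter $\mu > 0$, together with one standard continuously differentiable surrogate for the L1-norm (the specific smoothing used in sFedHP may differ), can be written as
$$ f_{\mu}(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\mu} \lVert y - x \rVert^2 \Big\}, \qquad \lVert x \rVert_{1,\epsilon} = \sum_{i} \sqrt{x_i^2 + \epsilon^2}. $$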
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure Aggregation in Federated Learning [86.98177890676077]
We present two novel coded federated learning (FL) schemes for linear regression that mitigate the effect of straggling devices.
The first scheme, CodedPaddedFL, mitigates the effect of straggling devices while retaining the privacy level of conventional FL.
The second scheme, CodedSecAgg, provides straggler resiliency and robustness against model inversion attacks.
arXiv Detail & Related papers (2021-12-16T14:26:30Z)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.