Semi-Synchronous Federated Learning
- URL: http://arxiv.org/abs/2102.02849v1
- Date: Thu, 4 Feb 2021 19:33:35 GMT
- Title: Semi-Synchronous Federated Learning
- Authors: Dimitris Stripelis and Jose Luis Ambite
- Abstract summary: We introduce a novel Semi-Synchronous Federated Learning protocol that mixes local models periodically with minimal idle time and fast convergence.
We show through extensive experiments that our approach significantly outperforms previous work in data and computationally heterogeneous environments.
- Score: 1.1168121941015012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are situations where data relevant to a machine learning problem are
distributed among multiple locations that cannot share the data due to
regulatory, competitiveness, or privacy reasons. For example, data present in
users' cellphones, manufacturing data of companies in a given industrial
sector, or medical records located at different hospitals. Federated Learning
(FL) provides an approach to learn a joint model over all the available data
across silos. In many cases, participating sites have different data
distributions and computational capabilities. In these heterogeneous
environments previous approaches exhibit poor performance: synchronous FL
protocols are communication efficient, but have slow learning convergence;
conversely, asynchronous FL protocols have faster convergence, but at a higher
communication cost. Here we introduce a novel Semi-Synchronous Federated
Learning protocol that mixes local models periodically with minimal idle time
and fast convergence. We show through extensive experiments that our approach
significantly outperforms previous work in data and computationally
heterogeneous environments.
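The core idea in the abstract — a fixed synchronization period during which every client trains for the same wall-clock time, so fast clients take more local steps and no client sits idle — can be sketched as follows. This is a hypothetical illustration under assumed names and a toy scalar model, not the authors' exact SemiSync algorithm:

```python
# Sketch of one semi-synchronous FL round: all clients train for the
# same wall-clock period, so a fast client completes more local SGD
# steps than a slow one, and the server mixes whatever models exist at
# the period boundary (here weighted by local data size, as in FedAvg).

def local_steps(speed, period):
    """Steps a client completes in one sync period (speed = steps/sec)."""
    return max(1, int(speed * period))

def local_train(w, steps, lr=0.1, target=1.0):
    """Toy local SGD on f(w) = (w - target)^2 for a scalar model."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)  # gradient of (w - target)^2
    return w

def semi_sync_round(global_w, clients, period):
    """One round: every client trains for the same wall-clock period,
    then the server averages the models weighted by local data size."""
    updates, sizes = [], []
    for speed, n_examples in clients:
        updates.append(local_train(global_w, local_steps(speed, period)))
        sizes.append(n_examples)
    total = sum(sizes)
    return sum(w * n / total for w, n in zip(updates, sizes))

# Computationally heterogeneous clients: (steps/sec, local examples).
# None of them idles waiting for a straggler, unlike synchronous FL.
clients = [(100.0, 5000), (10.0, 1000), (1.0, 200)]
w = 0.0
for _ in range(5):
    w = semi_sync_round(w, clients, period=1.0)
print(round(w, 3))  # → 1.0, the shared optimum
```

The period length is the tunable knob: it trades communication cost (shorter periods, more mixing, as in asynchronous FL) against staleness of slow clients' contributions (longer periods, as in synchronous FL).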
Related papers
- Federated Impression for Learning with Distributed Heterogeneous Data [19.50235109938016]
Federated learning (FL) provides a paradigm that can learn from distributed datasets across clients without requiring them to share data.
In FL, sub-optimal convergence is common among data from different health centers due to the variety in data collection protocols and patient demographics across centers.
We propose FedImpres, which alleviates catastrophic forgetting by restoring synthetic data that represents the global information as a federated impression.
arXiv Detail & Related papers (2024-09-11T15:37:52Z)
- Lightweight Industrial Cohorted Federated Learning for Heterogeneous Assets [0.0]
Federated Learning (FL) is the most widely adopted collaborative learning approach for training decentralized Machine Learning (ML) models.
However, because high data similarity or homogeneity is assumed in most FL tasks, FL is still not specifically designed for the industrial setting.
We propose a Lightweight Industrial Cohorted FL (LICFL) algorithm that uses model parameters for cohorting without any additional on-edge (client-level) computations and communications.
arXiv Detail & Related papers (2024-07-25T12:48:56Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks, achieving substantial and consistent improvements under diverse sources of heterogeneity.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, automatically tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity across clients.
The proposed aggregation algorithms are analyzed extensively from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z) - HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics
in Industrial Metaverse [49.1501082763252]
This paper presents HFedMS for incorporating practical FL into the emerging Industrial Metaverse.
It reduces data heterogeneity through dynamic grouping and training mode conversion.
Then, it compensates for the forgotten knowledge by fusing compressed historical data semantics.
Experiments have been conducted on the streamed non-i.i.d. FEMNIST dataset using 368 simulated devices.
arXiv Detail & Related papers (2022-11-07T04:33:24Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Towards Heterogeneous Clients with Elastic Federated Learning [45.2715985913761]
Federated learning involves training machine learning models over devices or data silos, such as edge processors or data warehouses, while keeping the data local.
We propose Elastic Federated Learning (EFL), an unbiased algorithm to tackle the heterogeneity in the system.
It is an efficient and effective algorithm that compresses both upstream and downstream communications.
arXiv Detail & Related papers (2021-06-17T12:30:40Z) - Scaling Neuroscience Research using Federated Learning [1.2234742322758416]
Machine learning approaches that require data to be copied to a single location are hampered by the challenges of data sharing.
Federated Learning is a promising approach to learn a joint model over data silos.
This architecture does not share any subject data across sites, only aggregated parameters, often in encrypted environments.
arXiv Detail & Related papers (2021-02-16T20:30:04Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Accelerating Federated Learning in Heterogeneous Data and Computational
Environments [0.7106986689736825]
We introduce a novel distributed validation weighting scheme (DVW), which evaluates the performance of a learner in the federation against a distributed validation set.
We empirically show that DVW results in better performance compared to established methods, such as FedAvg.
arXiv Detail & Related papers (2020-08-25T21:28:38Z)
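The DVW entry above contrasts validation-based weighting with FedAvg's data-size weighting. A minimal illustration of that contrast, with assumed names and an assumed weighting rule (not the paper's exact scheme):

```python
# Hypothetical comparison of FedAvg aggregation against a DVW-style
# validation-weighted mix: FedAvg weights each learner's parameters by
# its local data size, while the validation-weighted variant weights
# them by performance on a (distributed) validation set.

def fed_avg(models, data_sizes):
    """Classic FedAvg: average parameter vectors weighted by data size."""
    total = sum(data_sizes)
    dim = len(models[0])
    return [sum(m[i] * n for m, n in zip(models, data_sizes)) / total
            for i in range(dim)]

def validation_weighted(models, val_scores):
    """DVW-style mix: weight each model by its validation score."""
    total = sum(val_scores)
    dim = len(models[0])
    return [sum(m[i] * s for m, s in zip(models, val_scores)) / total
            for i in range(dim)]

# Three learners' parameter vectors, data sizes, validation accuracies.
models = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
sizes = [8000, 1000, 1000]   # FedAvg favors the big-data learner
scores = [0.30, 0.90, 0.60]  # validation favors the better models

print(fed_avg(models, sizes))            # → [0.85, 0.15]
print(validation_weighted(models, scores))
```

The point of the comparison: when a learner holds a lot of data that is unrepresentative of the federation, size-based weights over-trust it, whereas validation-based weights shift mass toward learners that actually generalize.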
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.