Asynchronous Hierarchical Federated Learning
- URL: http://arxiv.org/abs/2206.00054v1
- Date: Tue, 31 May 2022 18:42:29 GMT
- Title: Asynchronous Hierarchical Federated Learning
- Authors: Xing Wang, Yijun Wang
- Abstract summary: Asynchronous hierarchical federated learning is proposed to solve the problems of heavy server traffic, slow convergence, and unreliable accuracy.
A special aggregator device is selected to enable hierarchical learning, so that the burden of the server can be significantly reduced.
We evaluate the proposed algorithm on the CIFAR-10 image classification task.
- Score: 10.332084068006345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a rapidly growing area of research with various benefits and industry applications. Typical federated patterns have some intrinsic issues such as heavy server traffic, slow convergence, and unreliable accuracy. In this paper, we address these issues by proposing asynchronous hierarchical federated learning, in which the central server uses either the network topology or some clustering algorithm to assign workers (i.e., client devices) to clusters. In each cluster, a special aggregator device is selected to enable hierarchical learning, which leads to efficient communication between server and workers and significantly reduces the burden on the server. In addition, an asynchronous federated learning scheme is used to tolerate system heterogeneity and achieve fast convergence: the server aggregates gradients from the workers, weighted by a staleness parameter, to update the global model, and the workers perform regularized stochastic gradient descent so that the instability of asynchronous learning is alleviated. We evaluate the proposed algorithm on the CIFAR-10 image classification task; the experimental results demonstrate the effectiveness of asynchronous hierarchical federated learning.
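As a rough illustration of the mechanics described above, here is a minimal sketch of the three ingredients the abstract names: per-cluster aggregation, staleness-weighted server updates, and regularized local SGD. It assumes a polynomial staleness function and a proximal regularizer; all function names and hyperparameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def staleness_weight(delay: int, alpha: float = 0.6, a: float = 0.5) -> float:
    # Polynomial staleness discount: older gradients get smaller weight.
    # alpha and the exponent a are illustrative values, not from the paper.
    return alpha * (1.0 + delay) ** (-a)

def cluster_aggregate(worker_grads: list) -> np.ndarray:
    # The cluster's aggregator device averages its workers' gradients and
    # forwards a single message upstream, reducing server traffic.
    return np.mean(worker_grads, axis=0)

def server_apply_update(w_global: np.ndarray, agg_grad: np.ndarray,
                        worker_round: int, server_round: int,
                        lr: float = 0.1) -> np.ndarray:
    # Down-weight an incoming (possibly stale) aggregated gradient before
    # applying it to the global model.
    delay = server_round - worker_round
    return w_global - lr * staleness_weight(delay) * agg_grad

def worker_local_step(w_local: np.ndarray, w_global: np.ndarray,
                      data_grad: np.ndarray, lr: float = 0.1,
                      mu: float = 0.01) -> np.ndarray:
    # Regularized SGD: the proximal term (mu/2)||w - w_global||^2 pulls the
    # local model toward the last global model received, damping the
    # instability of asynchronous updates.
    grad = data_grad + mu * (w_local - w_global)
    return w_local - lr * grad
```

In a full system the aggregators sit between workers and server, so the server receives one staleness-weighted message per cluster rather than one per worker.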
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated decentralized federated learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z)
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Queuing dynamics of asynchronous Federated Learning [15.26212962081762]
We study asynchronous federated learning mechanisms with nodes having potentially different computational speeds.
We propose a non-uniform sampling scheme for the central server that allows for lower delays with better complexity (a toy version is sketched after this list).
Our experiments clearly show a significant improvement of our method over current state-of-the-art asynchronous algorithms on an image classification problem.
arXiv Detail & Related papers (2024-02-12T18:32:35Z)
- Scheduling and Communication Schemes for Decentralized Federated Learning [0.31410859223862103]
A decentralized federated learning (DFL) model with the stochastic gradient descent (SGD) algorithm has been introduced.
Three scheduling policies for DFL have been proposed for communications between the clients and the parallel servers.
Results show that the proposed scheduling policies have an impact both on the speed of convergence and on the final global model.
arXiv Detail & Related papers (2023-11-27T17:35:28Z)
- FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices using a Computing Power Aware Scheduler [5.550660753625296]
Cross-silo federated learning offers a promising solution to collaboratively train AI models without compromising privacy of local datasets.
In this paper, we introduce an innovative semi-asynchronous federated learning algorithm, FedCompass, with a computing-power-aware scheduler on the server side.
We demonstrate that FedCompass achieves faster convergence and higher accuracy than other algorithms when performing federated learning on heterogeneous clients.
arXiv Detail & Related papers (2023-09-26T05:03:13Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- Hierarchical Over-the-Air FedGradNorm [50.756991828015316]
Multi-task learning (MTL) is a learning paradigm to learn multiple related tasks simultaneously with a single shared network.
We propose hierarchical over-the-air (HOTA) PFL with a dynamic weighting strategy which we call HOTA-FedGradNorm.
arXiv Detail & Related papers (2022-12-14T18:54:46Z)
- Locally Asynchronous Stochastic Gradient Descent for Decentralised Deep Learning [0.0]
Local Asynchronous SGD (LASGD) is an asynchronous decentralized algorithm that relies on All Reduce for model synchronization.
We empirically validate LASGD's performance on image classification tasks on the ImageNet dataset.
arXiv Detail & Related papers (2022-03-24T14:25:15Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning [49.15799302636519]
We design a low-complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns.
We show that it is possible to achieve equivalent learning performance as if the data is available in a single place.
arXiv Detail & Related papers (2020-09-29T13:08:12Z)
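For the queuing-dynamics entry above, a toy version of non-uniform worker sampling might look like the following; sampling in proportion to node speed is an illustrative rule under assumed speeds, not the paper's exact scheme.

```python
import numpy as np

def sample_worker(speeds: np.ndarray, rng: np.random.Generator) -> int:
    # Pick one worker with probability proportional to its computational
    # speed, so slow nodes are polled less often and induce smaller delays.
    probs = speeds / speeds.sum()
    return int(rng.choice(len(speeds), p=probs))

rng = np.random.default_rng(0)
speeds = np.array([4.0, 2.0, 1.0, 0.5])  # hypothetical relative node speeds
counts = np.bincount([sample_worker(speeds, rng) for _ in range(1000)])
print(counts)  # fast nodes are sampled roughly in proportion to their speed
```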