Node Selection Toward Faster Convergence for Federated Learning on
Non-IID Data
- URL: http://arxiv.org/abs/2105.07066v1
- Date: Fri, 14 May 2021 20:56:09 GMT
- Title: Node Selection Toward Faster Convergence for Federated Learning on
Non-IID Data
- Authors: Hongda Wu, Ping Wang
- Abstract summary: Federated Learning (FL) is a distributed learning paradigm that enables a large number of resource-limited nodes to collaboratively train a model without data sharing.
We propose the Optimal Aggregation algorithm, which finds the optimal subset of local updates from the participating nodes in each global round.
We also propose a Probabilistic Node Selection framework (FedPNS) that dynamically adjusts the probability of each node being selected.
- Score: 6.040848035935873
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning (FL) is a distributed learning paradigm that enables a
large number of resource-limited nodes to collaboratively train a model without
data sharing. Non-independent-and-identically-distributed (non-i.i.d.) data
samples induce a discrepancy between the global and local objectives, slowing
the convergence of the FL model. In this paper, we propose the Optimal
Aggregation algorithm, which finds the optimal subset of local updates from the
participating nodes in each global round by identifying and excluding adverse
local updates through checking the relationship between each local gradient and
the global gradient. We then propose a Probabilistic Node Selection framework
(FedPNS) that dynamically adjusts the probability of each node being selected
based on the output of Optimal Aggregation. FedPNS can
preferentially select nodes that propel faster model convergence. The
unbiasedness of the proposed FedPNS design is illustrated and the convergence
rate improvement of FedPNS over the commonly adopted Federated Averaging
(FedAvg) algorithm is analyzed theoretically. Experimental results demonstrate
the effectiveness of FedPNS in accelerating the FL convergence rate, as
compared to FedAvg with random node selection.
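For intuition, here is a minimal NumPy sketch of the two mechanisms described in the abstract: excluding adverse local updates whose gradients conflict with the aggregated gradient, and then adjusting node-selection probabilities based on which updates were kept. The inner-product test, the boost/damp constants, and the probability update below are illustrative simplifications, not the paper's exact criteria or analysis.

```python
import numpy as np

def optimal_aggregation(local_grads, data_weights):
    """Sketch: iteratively drop the local update whose gradient is most
    misaligned (negative inner product) with the current aggregate gradient,
    then re-aggregate the remaining updates."""
    grads = np.stack(local_grads)                     # (num_nodes, dim)
    w = np.asarray(data_weights, dtype=float)
    keep = np.arange(len(grads))                      # positions within local_grads
    while len(keep) > 1:
        global_grad = np.average(grads[keep], axis=0, weights=w[keep])
        align = grads[keep] @ global_grad             # local vs. global gradient
        worst = int(np.argmin(align))
        if align[worst] >= 0:                         # no adverse update remains
            break
        keep = np.delete(keep, worst)                 # exclude the adverse update
    return keep, np.average(grads[keep], axis=0, weights=w[keep])

def update_selection_probs(probs, participated, kept_ids, boost=1.1, damp=0.9):
    """Sketch of a FedPNS-style rule: raise the selection probability of nodes
    whose updates were kept by Optimal Aggregation, lower it for excluded
    nodes, then renormalize (the paper derives a different, principled update)."""
    probs = np.asarray(probs, dtype=float).copy()
    for node in participated:
        probs[node] *= boost if node in kept_ids else damp
    return probs / probs.sum()

# Toy usage: 4 participating nodes out of 10, node 3 holds an adverse update.
rng = np.random.default_rng(0)
grads = [rng.normal(size=10) + 1.0 for _ in range(3)] + [-np.ones(10)]
kept, agg_grad = optimal_aggregation(grads, data_weights=[1, 1, 1, 1])
probs = update_selection_probs(np.full(10, 0.1), participated=range(4),
                               kept_ids=set(kept.tolist()))
```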
Related papers
- FedEP: Tailoring Attention to Heterogeneous Data Distribution with Entropy Pooling for Decentralized Federated Learning [8.576433180938004]
This paper proposes a novel decentralized federated learning (DFL) aggregation algorithm, Federated Entropy Pooling (FedEP).
FedEP mitigates the client drift problem by incorporating the statistical characteristics of local distributions instead of any actual data.
Experiments demonstrate that FedEP achieves faster convergence and higher test performance than state-of-the-art approaches.
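The summary does not spell out the pooling rule, so the following is only an illustrative assumption of how the entropy of shared local label statistics could drive aggregation weights; FedEP's actual formulation is more involved.

```python
import numpy as np

def label_entropy(label_counts):
    """Shannon entropy of a client's label histogram (only counts are shared,
    never raw data)."""
    p = np.asarray(label_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_weighted_aggregate(client_models, client_label_counts):
    """Assumed rule for illustration: clients with more balanced (higher
    entropy) label distributions receive larger aggregation weights."""
    ents = np.array([label_entropy(c) for c in client_label_counts]) + 1e-12
    weights = ents / ents.sum()
    return weights @ np.stack(client_models)          # weighted model average
```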
arXiv Detail & Related papers (2024-10-10T07:39:15Z) - Accelerating Federated Learning by Selecting Beneficial Herd of Local Gradients [40.84399531998246]
Federated Learning (FL) is a distributed machine learning framework in communication network systems.
Non-Independent and Identically Distributed (Non-IID) data negatively affect the convergence efficiency of the global model.
We propose the BHerd strategy which selects a beneficial herd of local gradients to accelerate the convergence of the FL model.
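The selection criterion is not detailed in this summary; as a rough illustration only, the sketch below keeps the local gradients most aligned with their mean and averages just those. The BHerd paper's exact ordering rule may differ.

```python
import numpy as np

def select_beneficial_herd(local_grads, keep_frac=0.5):
    """Keep the fraction of local gradients closest (by cosine similarity) to
    their mean, and average only the selected herd."""
    grads = np.stack(local_grads)                         # (num_nodes, dim)
    mean = grads.mean(axis=0)
    cos = (grads @ mean) / (np.linalg.norm(grads, axis=1)
                            * np.linalg.norm(mean) + 1e-12)
    k = max(1, int(round(keep_frac * len(grads))))
    herd = np.argsort(-cos)[:k]                           # most aligned first
    return herd, grads[herd].mean(axis=0)
```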
arXiv Detail & Related papers (2024-03-25T09:16:59Z) - Rethinking Clustered Federated Learning in NOMA Enhanced Wireless
Networks [60.09912912343705]
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-independent and identically distributed (non-IID) datasets.
A detailed theoretical analysis of the generalization gap, which measures the degree of non-IID-ness in the data distribution, is presented.
Solutions to the challenges posed by non-IID conditions are proposed based on an analysis of their properties.
arXiv Detail & Related papers (2024-03-05T17:49:09Z) - FedNAR: Federated Optimization with Normalized Annealing Regularization [54.42032094044368]
We explore the choice of weight decay and find that its value appreciably influences the convergence of existing FL algorithms.
We develop Federated optimization with Normalized Annealing Regularization (FedNAR), a plug-in that can be seamlessly integrated into any existing FL algorithm.
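As a rough sketch of the plug-in idea (not the paper's exact normalization or annealing schedule), a local update step can cap the weight-decay term relative to the gradient and shrink the decay strength over global rounds:

```python
import numpy as np

def local_step_with_nar(w, grad, lr, weight_decay, round_idx, total_rounds):
    """Hypothetical normalized-and-annealed weight decay: the decay term is
    rescaled so it never dominates the gradient, and its strength decays
    linearly over communication rounds."""
    strength = weight_decay * (1.0 - round_idx / total_rounds)   # annealing
    decay = strength * w
    g_norm, d_norm = np.linalg.norm(grad), np.linalg.norm(decay)
    if d_norm > g_norm:                                          # normalization
        decay *= g_norm / (d_norm + 1e-12)
    return w - lr * (grad + decay)
```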
arXiv Detail & Related papers (2023-10-04T21:11:40Z) - FedHB: Hierarchical Bayesian Federated Learning [11.936836827864095]
We propose a novel hierarchical Bayesian approach to Federated Learning (FL).
Our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling.
We show that our block-coordinate FL algorithm converges to an optimum of the objective at the rate of $O(1/\sqrt{t})$.
arXiv Detail & Related papers (2023-05-08T18:21:41Z) - FedSkip: Combatting Statistical Heterogeneity with Federated Skip
Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip, which improves the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
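A minimal sketch of the skip-and-scatter loop as described above; the scattering rule (a random permutation of local models across devices) is an assumption for illustration.

```python
import numpy as np

def fedskip_server_round(local_models, round_idx, skip_period=4, rng=None):
    """On most rounds do plain FedAvg; on every `skip_period`-th round skip
    averaging and scatter the local models so each device continues training
    from another device's model."""
    if rng is None:
        rng = np.random.default_rng(round_idx)
    models = np.stack(local_models)                 # (num_devices, dim)
    if round_idx > 0 and round_idx % skip_period == 0:
        perm = rng.permutation(len(models))         # scatter instead of average
        return [models[j] for j in perm]
    avg = models.mean(axis=0)                       # federated averaging
    return [avg.copy() for _ in range(len(models))]
```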
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - Disentangled Federated Learning for Tackling Attributes Skew via
Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew prevents current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
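A schematic sketch of the two-branch split suggested by the summary: only the cross-client invariant branch is aggregated on the server, while each client's domain-specific branch stays local. The split and aggregation rule here are assumptions; DFL's actual objective and transfer mechanism are described in the paper.

```python
import numpy as np

def aggregate_invariant_branch(client_params):
    """client_params: list of dicts with an 'invariant' branch (averaged across
    clients) and a 'specific' branch (kept local on each client)."""
    invariant_avg = np.mean([c["invariant"] for c in client_params], axis=0)
    return [{"invariant": invariant_avg.copy(), "specific": c["specific"]}
            for c in client_params]
```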
arXiv Detail & Related papers (2022-06-14T13:12:12Z) - Fast-Convergent Federated Learning with Adaptive Weighting [6.040848035935873]
Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server.
We propose the Federated Adaptive Weighting (FedAdp) algorithm, which aims to accelerate model convergence in the presence of nodes with non-IID datasets.
We show that FL training with FedAdp can reduce the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on the FashionMNIST dataset.
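FedAdp is the earlier work by the same authors; the sketch below conveys the angle-based adaptive weighting idea, where nodes whose local gradients point closer to the global gradient get larger aggregation weights through a Gompertz-style mapping. The constants and the cross-round smoothing of angles used in the paper are omitted.

```python
import numpy as np

def fedadp_weights(local_grads, global_grad, alpha=5.0):
    """Map the angle between each local gradient and the global gradient to an
    aggregation weight: smaller angle -> larger weight."""
    G = np.stack(local_grads)
    cos = (G @ global_grad) / (np.linalg.norm(G, axis=1)
                               * np.linalg.norm(global_grad) + 1e-12)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))              # angle in radians
    contrib = alpha * (1.0 - np.exp(-np.exp(-alpha * (theta - 1.0))))
    weights = np.exp(contrib)
    return weights / weights.sum()                          # normalized weights
```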
arXiv Detail & Related papers (2020-12-01T17:35:05Z) - Tackling the Objective Inconsistency Problem in Heterogeneous Federated
Optimization [93.78811018928583]
This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
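FedNova's normalized averaging can be sketched for the vanilla local-SGD case: each client's cumulative update is divided by its number of local steps before averaging, then rescaled by the effective step count, so clients running more local steps do not skew the global objective (learning-rate bookkeeping is omitted here).

```python
import numpy as np

def fednova_aggregate(global_model, local_models, local_steps, data_weights):
    """Normalized averaging (vanilla SGD case): average per-step updates and
    rescale by the effective number of local steps."""
    p = np.asarray(data_weights, dtype=float)
    p = p / p.sum()
    tau = np.asarray(local_steps, dtype=float)
    deltas = np.stack([global_model - m for m in local_models])  # cumulative updates
    per_step = deltas / tau[:, None]                             # normalize by #local steps
    tau_eff = float(p @ tau)                                     # effective step count
    return global_model - tau_eff * (p @ per_step)
```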
arXiv Detail & Related papers (2020-07-15T05:01:23Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
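Outside the FL setting, the connection-sampling idea can be illustrated with a single GNN layer in which adjacency entries are randomly masked before message passing; in the paper the drop rates are learned adaptively and treated in a Bayesian fashion, which this fixed-rate sketch omits.

```python
import numpy as np

def connection_sampled_layer(H, A, W, drop_prob=0.3, rng=None):
    """One GNN layer with sampled connections: each edge is kept with
    probability 1 - drop_prob, then ReLU(A_sampled @ H @ W) is applied."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = (rng.random(A.shape) < 1.0 - drop_prob).astype(A.dtype)
    A_sampled = A * mask                       # randomly dropped connections
    return np.maximum(A_sampled @ H @ W, 0.0)  # message passing + ReLU
```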
arXiv Detail & Related papers (2020-06-07T07:06:35Z)