OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
- URL: http://arxiv.org/abs/2410.06482v1
- Date: Wed, 9 Oct 2024 02:16:14 GMT
- Title: OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
- Authors: Qinglun Li, Miao Zhang, Mengzhu Wang, Quanjun Yin, Li Shen,
- Abstract summary: Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and light communication.
However, DFL still exhibits significant disparities with CFL in terms of generalization ability.
- Score: 21.440625995788974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and lighter communication, making it a promising alternative in the field of federated learning. However, DFL still exhibits significant gaps to CFL in generalization ability, with little theoretical understanding and degraded empirical performance due to severe inconsistency. In this paper, we enhance the consistency of DFL by developing an opposite lookahead enhancement technique (Ole), yielding OledFL, which optimizes the initialization of each client in each communication round and thus significantly improves both generalization and convergence speed. Moreover, we rigorously establish its convergence rate in the non-convex setting and characterize its generalization bound through uniform stability, which provides concrete reasons why OledFL achieves both fast convergence and high generalization ability. Extensive experiments on the CIFAR10 and CIFAR100 datasets with Dirichlet and Pathological distributions show that OledFL achieves up to 5\% performance improvement and an 8$\times$ speedup compared to the most popular DFedAvg optimizer in DFL.
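The abstract describes Ole only at a high level: it re-optimizes each client's initialization in every communication round to improve consistency. As a rough illustration, the following is a minimal, hypothetical Python sketch of how a lookahead-style re-initialization could be slotted into a DFedAvg-style round; the function names, the uniform neighbor mixing, and the extrapolation `mixed + alpha * (mixed - local)` are assumptions made for this sketch, not the paper's exact update rule.

```python
import numpy as np

def local_sgd(w, grad_fn, lr=0.1, steps=5):
    """Plain local SGD from initialization w (stand-in for client training)."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def dfl_round_with_ole(inits, neighbors, grad_fns, alpha=0.5):
    """One decentralized round with an opposite-lookahead style re-initialization.

    inits:     list of per-client parameter vectors (round initializations)
    neighbors: adjacency list defining the communication graph
    grad_fns:  per-client stochastic gradient callables
    alpha:     assumed opposite-lookahead coefficient (hypothetical)
    """
    n = len(inits)
    # 1) Local training from each client's current initialization.
    local = [local_sgd(inits[i], grad_fns[i]) for i in range(n)]
    # 2) Gossip step: average each client's model with its neighbors
    #    (uniform mixing weights assumed for simplicity).
    mixed = [np.mean([local[i]] + [local[j] for j in neighbors[i]], axis=0)
             for i in range(n)]
    # 3) Opposite-lookahead re-initialization (one illustrative reading of Ole):
    #    start the next round extrapolated away from the client's own local
    #    iterate, pulling per-client initializations toward consensus.
    return [mixed[i] + alpha * (mixed[i] - local[i]) for i in range(n)]
```

Under this reading, step 3 starts each client's next round away from its own drifted local model and past the mixed point, which is one plausible way an initialization adjustment could reduce inconsistency in DFL; the paper's actual formulation and analysis should be consulted for the precise rule.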
Related papers
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods still lags far behind that of normal FL.
We propose a novel learning approach, termed FuseFL, to endow OFL with superb performance and low communication and storage costs.
arXiv Detail & Related papers (2024-10-27T09:07:10Z) - UniFL: Improve Stable Diffusion via Unified Feedback Learning [51.18278664629821]
We present UniFL, a unified framework that leverages feedback learning to enhance diffusion models comprehensively.
UniFL incorporates three key components: perceptual feedback learning, which enhances visual quality; decoupled feedback learning, which improves aesthetic appeal; and adversarial feedback learning, which optimizes inference speed.
In-depth experiments and extensive user studies validate the superior performance of our proposed method in enhancing both the quality of generated models and their acceleration.
arXiv Detail & Related papers (2024-04-08T15:14:20Z) - Convergence Analysis of Sequential Federated Learning on Heterogeneous Data [5.872735527071425]
There are two categories of methods in Federated Learning (FL) for joint training across multiple clients: i) parallel FL (PFL), where clients train models in parallel; and ii) sequential FL (SFL), where clients train one after another.
Convergence guarantees for SFL on heterogeneous data have been lacking; in this paper, we establish them (the two training schedules are sketched in code after this list).
Experimental results validate the counterintuitive finding that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
arXiv Detail & Related papers (2023-11-06T14:48:51Z) - Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning [57.35402286842029]
We show that centralized federated learning (CFL) always generalizes better than decentralized federated learning (DFL).
We also conduct experiments on several common setups in FL to validate that our theoretical analysis is consistent with experimental phenomena and contextually valid in several general and practical scenarios.
arXiv Detail & Related papers (2023-10-05T11:09:42Z) - Communication Resources Constrained Hierarchical Federated Learning for End-to-End Autonomous Driving [67.78611905156808]
This paper proposes an optimization-based Communication Resource Constrained Hierarchical Federated Learning framework.
Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of the federated learning autonomous driving model.
arXiv Detail & Related papers (2023-06-28T12:44:59Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Accelerating Federated Learning with a Global Biased Optimiser [16.69005478209394]
Federated Learning (FL) is a recent development in the field of machine learning that collaboratively trains models without the training data leaving client devices.
We propose a novel, generalised approach for applying adaptive optimisation techniques to FL with the Federated Global Biased Optimiser (FedGBO) algorithm.
FedGBO accelerates FL by applying a set of global biased optimiser values during the local training phase of FL, which helps to reduce 'client-drift' from non-IID data.
arXiv Detail & Related papers (2021-08-20T12:08:44Z) - Decentralized Federated Learning: Balancing Communication and Computing Costs [21.694468026280806]
Decentralized federated learning (DFL) is a powerful framework of distributed machine learning.
We propose a general decentralized federated learning framework to strike a balance between communication-efficiency and convergence performance.
Experiment results based on MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods.
arXiv Detail & Related papers (2021-07-26T09:09:45Z) - Federated Learning with Nesterov Accelerated Gradient Momentum Method [47.49442006239034]
Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model based on a distributed dataset.
It is well known that Nesterov Accelerated Gradient (NAG) is more advantageous in a centralized training environment.
In this work, we focus on a version of FL based on NAG and provide a detailed convergence analysis.
arXiv Detail & Related papers (2020-09-18T09:38:11Z)
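To make the parallel-versus-sequential distinction in the "Convergence Analysis of Sequential Federated Learning on Heterogeneous Data" entry above concrete, here is a minimal sketch of the two client-update schedules, as referenced in that entry. The function names and the FedAvg-style averaging are illustrative assumptions, not the cited paper's exact protocols.

```python
import numpy as np

def parallel_fl_round(global_w, client_grad_fns, lr=0.1, steps=5):
    """PFL: every client starts from the same global model, trains locally
    in parallel, and the server averages the resulting models (FedAvg-style)."""
    client_models = []
    for grad_fn in client_grad_fns:
        w = global_w.copy()
        for _ in range(steps):
            w = w - lr * grad_fn(w)
        client_models.append(w)
    return np.mean(client_models, axis=0)

def sequential_fl_round(global_w, client_grad_fns, lr=0.1, steps=5):
    """SFL: the model is passed from client to client; each client continues
    training from where the previous client stopped."""
    w = global_w.copy()
    for grad_fn in client_grad_fns:
        for _ in range(steps):
            w = w - lr * grad_fn(w)
    return w
```

The key difference is that PFL averages independently trained models once per round, while SFL accumulates updates along a chain of clients, which is why heterogeneous data affects the two schedules differently.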
This list is automatically generated from the titles and abstracts of the papers in this site.