Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning
- URL: http://arxiv.org/abs/2310.03461v2
- Date: Sat, 12 Oct 2024 08:33:59 GMT
- Title: Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning
- Authors: Yan Sun, Li Shen, Dacheng Tao,
- Abstract summary: We show that centralized federated learning (CFL) always generalizes better than decentralized federated learning (DFL).
We also conduct experiments on several common setups in FL to validate that our theoretical analysis is consistent with experimental phenomena and contextually valid in several general and practical scenarios.
- Score: 57.35402286842029
- Abstract: As two mainstream frameworks in federated learning (FL), both centralized and decentralized approaches have shown great application value in practical scenarios. However, existing studies do not provide sufficient evidence or clear guidance on which performs better in the FL community. Although decentralized methods have been proven to achieve convergence comparable to centralized ones with less communication, their test performance always falls short of expectations in empirical studies. To comprehensively and fairly compare their efficiency gaps in FL, in this paper we explore their stability and generalization efficiency. Specifically, we prove that on general smooth non-convex objectives, 1) centralized FL (CFL) always generalizes better than decentralized FL (DFL); 2) CFL achieves its best performance by adopting partial participation instead of full participation; and 3) the topology in DFL must satisfy a necessary requirement to avoid performance collapse as the training scale increases. We also conduct extensive experiments on several common setups in FL to validate that our theoretical analysis is consistent with experimental phenomena and contextually valid in several general and practical scenarios.
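For readers unfamiliar with the two frameworks the abstract contrasts, the sketch below shows, in plain NumPy, the structural difference between CFL (a server averages updates from a sampled subset of clients, i.e. partial participation) and DFL (each client mixes its update with its neighbors through a mixing/topology matrix). The toy quadratic losses, learning rate, ring topology, and participation ratio are illustrative assumptions, not the paper's algorithms or experimental setup.

```python
# Minimal sketch contrasting the two aggregation patterns compared in the paper:
# server-side averaging in CFL (with partial participation) versus neighbor
# mixing over a gossip matrix W in DFL. All hyperparameters, the quadratic
# local losses, and the ring topology are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, d, lr = 8, 5, 0.1                      # clients, model dimension, step size
targets = rng.normal(size=(m, d))         # heterogeneous local optima

def local_step(x, k):
    """One gradient step on client k's toy quadratic loss 0.5*||x - targets[k]||^2."""
    return x - lr * (x - targets[k])

def cfl_round(x_global, participation=0.5):
    """CFL: a sampled subset of clients updates locally, the server averages them."""
    sampled = rng.choice(m, size=max(1, int(participation * m)), replace=False)
    updates = [local_step(x_global, k) for k in sampled]
    return np.mean(updates, axis=0)

def dfl_round(x_local, W):
    """DFL: every client updates locally, then mixes with its neighbors via W."""
    updated = np.stack([local_step(x_local[k], k) for k in range(m)])
    return W @ updated                     # row k averages client k's neighborhood

# Doubly stochastic mixing matrix for a ring topology (each client and its 2 neighbors).
W = np.zeros((m, m))
for k in range(m):
    W[k, [k, (k - 1) % m, (k + 1) % m]] = 1.0 / 3.0

x_cfl = np.zeros(d)
x_dfl = np.zeros((m, d))
for _ in range(100):
    x_cfl = cfl_round(x_cfl)
    x_dfl = dfl_round(x_dfl, W)

# Consensus gap: how far the DFL clients are from their own average model.
consensus_gap = np.linalg.norm(x_dfl - x_dfl.mean(axis=0))
print("CFL model norm:", np.linalg.norm(x_cfl))
print("DFL consensus gap:", consensus_gap)
```

The consensus gap printed at the end is one intuitive way to see how the DFL topology governs how far individual clients drift apart, which connects to the paper's third claim about the topology requirement as the training scale increases.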
Related papers
- OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement [21.440625995788974]
Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and light communication.
However, DFL still exhibits significant disparities with CFL in terms of generalization ability.
arXiv Detail & Related papers (2024-10-09T02:16:14Z) - Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z) - Real-World Federated Learning in Radiology: Hurdles to overcome and Benefits to gain [2.8048919658768523]
Federated Learning (FL) enables collaborative model training while keeping data locally.
Currently, most FL studies in radiology are conducted in simulated environments.
The few existing real-world FL initiatives rarely communicate the specific measures taken to overcome these hurdles.
arXiv Detail & Related papers (2024-05-15T15:04:27Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization [84.42306265220274]
Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model.
Previous works have shown that FL suffers from the "client-drift" problem, which is caused by the inconsistent optima across local clients.
To alleviate the negative impact of the "client drift" and explore its substance in FL, we first design an efficient FL algorithm, FedInit.
arXiv Detail & Related papers (2023-06-09T06:55:15Z) - FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
arXiv Detail & Related papers (2022-12-14T12:08:30Z) - Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges [0.0]
Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data.
Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation.
This article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators.
arXiv Detail & Related papers (2022-11-15T18:51:20Z) - ISFL: Federated Learning for Non-i.i.d. Data with Local Importance Sampling [17.29669920752378]
We propose importance sampling federated learning (ISFL), an explicit framework with theoretical guarantees.
We derive a convergence theorem for ISFL that incorporates the effects of local importance sampling.
We employ a water-filling method to calculate the IS weights and develop the ISFL algorithms.
arXiv Detail & Related papers (2022-10-05T09:43:58Z) - UniFed: All-In-One Federated Learning Platform to Unify Open-Source Frameworks [53.20176108643942]
We present UniFed, the first unified platform for standardizing open-source Federated Learning (FL) frameworks.
UniFed streamlines the end-to-end workflow for distributed experimentation and deployment, encompassing 11 popular open-source FL frameworks.
We evaluate and compare 11 popular FL frameworks from the perspectives of functionality, privacy protection, and performance.
arXiv Detail & Related papers (2022-07-21T05:03:04Z) - Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew steers current federated learning (FL) frameworks away from consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches; a minimal illustrative sketch of this split is given after this list.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
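As a rough illustration of the two-branch idea in the Disentangled Federated Learning entry above, the sketch below aggregates only a shared "invariant" branch across clients while each client keeps its "domain-specific" branch local. The branch names, parameter layout, stand-in local update, and averaging rule are assumptions made for illustration; the actual invariant aggregation and diversity transferring mechanisms are defined in that paper.

```python
# Illustrative sketch (not the paper's algorithm): federated averaging applied
# only to a shared "invariant" branch, while each client keeps a local
# "domain-specific" branch. Parameter names and shapes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
num_clients, d = 4, 16

def init_client_params():
    return {
        "invariant": rng.normal(size=d),        # shared across clients after aggregation
        "domain_specific": rng.normal(size=d),  # stays local, captures attribute skew
    }

def local_update(params, noise=0.01):
    """Stand-in for a local training step; perturbs both branches slightly."""
    return {k: v - noise * rng.normal(size=v.shape) for k, v in params.items()}

def aggregate_invariant(client_params):
    """Average only the invariant branch and broadcast it back to every client."""
    mean_inv = np.mean([p["invariant"] for p in client_params], axis=0)
    return [{"invariant": mean_inv.copy(),
             "domain_specific": p["domain_specific"]} for p in client_params]

clients = [init_client_params() for _ in range(num_clients)]
for _ in range(10):
    clients = [local_update(p) for p in clients]
    clients = aggregate_invariant(clients)

# After each round, invariant branches agree while domain-specific branches differ.
print("invariant branches identical:",
      all(np.allclose(clients[0]["invariant"], p["invariant"]) for p in clients))
```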
This list is automatically generated from the titles and abstracts of the papers in this site.