Which mode is better for federated learning? Centralized or Decentralized
- URL: http://arxiv.org/abs/2310.03461v1
- Date: Thu, 5 Oct 2023 11:09:42 GMT
- Title: Which mode is better for federated learning? Centralized or Decentralized
- Authors: Yan Sun, Li Shen, Dacheng Tao
- Abstract summary: Both centralized and decentralized approaches have shown excellent performance and great application value in federated learning (FL).
However, current studies do not provide evidence to show which one performs better.
- Score: 64.46017397813549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both centralized and decentralized approaches have shown excellent
performance and great application value in federated learning (FL). However,
current studies do not provide sufficient evidence to show which one performs
better. Although, from the optimization perspective, decentralized methods can
achieve convergence comparable to that of centralized methods with less
communication, their test performance has consistently lagged in empirical
studies. To comprehensively explore their behaviors in FL, we study their
excess risks, including the joint analysis of both optimization and
generalization. We prove that on smooth non-convex objectives, 1) centralized
FL (CFL) always generalizes better than decentralized FL (DFL); 2) from the
perspectives of excess risk and test error in CFL, adopting partial
participation is superior to full participation; and 3) there is a necessary
requirement for the topology in DFL to avoid performance collapse as the
training scale increases. Based on a few simple hardware metrics, we can
evaluate which framework is better in practice. Extensive experiments on
common FL setups validate that our theoretical analysis is valid in practical
scenarios.
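To make the CFL/DFL contrast concrete, here is a minimal sketch, not the paper's code: it runs one CFL aggregation round (server averaging with optional partial participation) and one DFL gossip round, and prints the spectral gap of a ring topology, whose shrinkage as the client count grows is one toy view of the topology requirement in point 3). The ring topology, uniform 1/3 mixing weights, and random client sampling are all illustrative assumptions.

```python
# Illustrative sketch only (assumed setup, not the paper's code): one CFL
# aggregation round vs. one DFL gossip round, plus the spectral gap of a
# ring topology, which shrinks as the number of clients n grows.
import numpy as np

def cfl_round(models: np.ndarray, participation: float = 1.0) -> np.ndarray:
    """Server averages a (possibly partial) random sample of client models."""
    n = models.shape[0]
    m = max(1, int(round(participation * n)))
    chosen = np.random.choice(n, size=m, replace=False)
    global_model = models[chosen].mean(axis=0)
    return np.tile(global_model, (n, 1))  # every client receives the average

def ring_mixing_matrix(n: int) -> np.ndarray:
    """Doubly stochastic gossip matrix: each client mixes self + 2 neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def dfl_round(models: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Each client averages only with its topology neighbors."""
    return W @ models

if __name__ == "__main__":
    models = np.random.randn(8, 4)  # 8 clients, 4-dim toy models
    print("CFL spread:", np.ptp(cfl_round(models, participation=0.5), axis=0).max())
    print("DFL spread:", np.ptp(dfl_round(models, ring_mixing_matrix(8)), axis=0).max())
    for n in (8, 64, 512):  # consensus slows as the spectral gap closes
        gap = 1.0 - np.sort(np.abs(np.linalg.eigvalsh(ring_mixing_matrix(n))))[-2]
        print(f"n={n:4d}  ring spectral gap ~ {gap:.4f}")
```

After one CFL round all clients hold the same model (zero spread), while one DFL gossip round leaves residual disagreement; the printed spectral gap decays roughly as O(1/n^2) on a ring, which is the flavor of scale-dependent topology requirement the abstract refers to.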
Related papers
- From Centralized to Decentralized Federated Learning: Theoretical Insights, Privacy Preservation, and Robustness Challenges [6.8109977763829885]
Federated Learning (FL) enables collaborative learning without directly sharing individuals' raw data.
FL can be implemented in either a centralized (server-based) or decentralized (peer-to-peer) manner.
arXiv Detail & Related papers (2025-03-10T16:27:40Z)
- OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement [21.440625995788974]
Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and lighter communication.
However, DFL still exhibits significant disparities with CFL in terms of generalization ability.
arXiv Detail & Related papers (2024-10-09T02:16:14Z)
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Real-World Federated Learning in Radiology: Hurdles to overcome and Benefits to gain [2.8048919658768523]
Federated Learning (FL) enables collaborative model training while keeping data locally.
Currently, most FL studies in radiology are conducted in simulated environments.
The few existing real-world FL initiatives rarely communicate the specific measures taken to overcome the hurdles they encountered.
arXiv Detail & Related papers (2024-05-15T15:04:27Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization [84.42306265220274]
Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model.
Previous works have implicitly shown that FL suffers from the "client drift" problem, which is caused by inconsistent optima across local clients (a toy sketch of this drift appears after this list).
To alleviate the negative impact of client drift and explore its substance in FL, we first design an efficient FL algorithm, FedInit.
arXiv Detail & Related papers (2023-06-09T06:55:15Z)
- FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS (Federated Learning AlGorithms Simulation) has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
arXiv Detail & Related papers (2022-12-14T12:08:30Z)
- Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges [0.0]
Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data.
Decentralized Federated Learning (DFL) emerged to address the drawbacks of centralized architectures by promoting decentralized model aggregation.
This article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators.
arXiv Detail & Related papers (2022-11-15T18:51:20Z)
- ISFL: Federated Learning for Non-i.i.d. Data with Local Importance Sampling [17.29669920752378]
We propose importance sampling federated learning (ISFL), an explicit framework with theoretical guarantees.
We derive a convergence theorem for ISFL that incorporates the effects of local importance sampling.
We employ a water-filling method to calculate the IS weights and develop the ISFL algorithms (an illustrative water-filling sketch appears after this list).
arXiv Detail & Related papers (2022-10-05T09:43:58Z)
- UniFed: All-In-One Federated Learning Platform to Unify Open-Source Frameworks [53.20176108643942]
We present UniFed, the first unified platform for standardizing open-source Federated Learning (FL) frameworks.
UniFed streamlines the end-to-end workflow for distributed experimentation and deployment, encompassing 11 popular open-source FL frameworks.
We evaluate and compare 11 popular FL frameworks from the perspectives of functionality, privacy protection, and performance.
arXiv Detail & Related papers (2022-07-21T05:03:04Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew steers current federated learning (FL) frameworks away from consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
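As referenced in the FedInit entry above, here is a toy demonstration of client drift; the heterogeneous quadratic objectives, step size, and round counts are all illustrative assumptions and are not taken from that paper.

```python
# Toy demonstration (assumed objectives, not from the FedInit paper) of
# "client drift": with heterogeneous quadratics f_i(w) = a_i * (w - c_i)^2 / 2
# and multiple local GD steps per round, FedAvg's fixed point is biased away
# from the true global optimum, and the bias grows with the number of local steps.
import numpy as np

a = np.array([1.0, 10.0])          # per-client curvatures (heterogeneous)
c = np.array([0.0, 1.0])           # per-client optima
w_star = (a * c).sum() / a.sum()   # global optimum of sum_i f_i

def fedavg(local_steps: int, lr: float = 0.05, rounds: int = 2000) -> float:
    w = 0.0
    for _ in range(rounds):
        client_models = []
        for ai, ci in zip(a, c):
            wi = w
            for _ in range(local_steps):   # local gradient descent
                wi -= lr * ai * (wi - ci)
            client_models.append(wi)
        w = float(np.mean(client_models))  # server averages client models
    return w

print(f"global optimum: {w_star:.4f}")
for k in (1, 5, 20):
    print(f"local_steps={k:2d}  FedAvg converges to {fedavg(k):.4f}")
```

With a single local step per round FedAvg recovers the global optimum, but with 5 or 20 local steps its fixed point drifts visibly away from it, which is the inconsistency the FedInit entry describes.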
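And the water-filling sketch referenced in the ISFL entry: a generic bisection-based water-filling allocation under a budget constraint. The per-client costs, the budget, and the mapping to ISFL's actual IS-weight derivation are hypothetical; the paper's formula is not reproduced here.

```python
# Generic water-filling sketch (hypothetical values; the ISFL paper's actual
# IS-weight derivation is not reproduced here). Allocate a fixed budget across
# clients as w_i = max(0, mu - cost_i), finding the water level mu by bisection.
import numpy as np

def water_filling(costs: np.ndarray, budget: float = 1.0, iters: int = 60) -> np.ndarray:
    lo, hi = float(costs.min()), float(costs.max()) + budget
    for _ in range(iters):                      # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - costs).sum() > budget:
            hi = mu                             # too much water allocated
        else:
            lo = mu                             # not enough yet
    return np.maximum(0.0, 0.5 * (lo + hi) - costs)

costs = np.array([0.1, 0.4, 0.2, 0.8])   # hypothetical per-client costs
weights = water_filling(costs)
print(weights, weights.sum())            # low-cost clients get more weight; sums to ~1
```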
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.