Which mode is better for federated learning? Centralized or Decentralized
- URL: http://arxiv.org/abs/2310.03461v1
- Date: Thu, 5 Oct 2023 11:09:42 GMT
- Title: Which mode is better for federated learning? Centralized or Decentralized
- Authors: Yan Sun, Li Shen, Dacheng Tao
- Abstract summary: Both centralized and decentralized approaches have shown excellent performance and great application value in federated learning (FL)
However, current studies do not provide evidence to show which one performs better.
- Score: 64.46017397813549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both centralized and decentralized approaches have shown excellent
performance and great application value in federated learning (FL). However,
current studies do not provide sufficient evidence to show which one performs
better. Although, from the optimization perspective, decentralized methods can
achieve convergence comparable to that of centralized methods with less
communication, their test performance has consistently been worse in empirical
studies. To comprehensively explore their behaviors in FL, we study their
excess risks, including the joint analysis of both optimization and
generalization. We prove that on smooth non-convex objectives, 1) centralized
FL (CFL) always generalizes better than decentralized FL (DFL); 2) from
perspectives of the excess risk and test error in CFL, adopting partial
participation is superior to full participation; and, 3) there is a necessary
requirement for the topology in DFL to avoid performance collapse as the
training scale increases. Based on a few simple hardware metrics, we can
evaluate which framework is better in practice. Extensive experiments are
conducted on common setups in FL to validate that our theoretical analysis is
contextually valid in practical scenarios.
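The centralized/decentralized split in the abstract comes down to how client models are combined each round. A minimal sketch, assuming scalar per-client models, a FedAvg-style server step, and a hand-built ring mixing matrix W (all illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def centralized_round(models, participating):
    """CFL: a server averages the models of the participating clients
    (partial participation), then broadcasts the result to everyone."""
    avg = np.mean(models[participating], axis=0)
    return np.full_like(models, avg)

def decentralized_round(models, W):
    """DFL: each client averages with its neighbors according to a
    doubly stochastic mixing matrix W encoding the topology."""
    return W @ models

# 4 clients on a ring topology; each row and column of W sums to 1.
n = 4
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

models = np.array([1.0, 2.0, 3.0, 4.0])
# Both schemes preserve the global average; DFL only contracts toward
# consensus at a rate governed by the spectrum of W, which is why the
# topology requirement in point 3) matters as the training scale grows.
print(centralized_round(models, [0, 2]))      # partial participation
print(decentralized_round(models, W).mean())  # 2.5, mean preserved
```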
Related papers
- Real-World Federated Learning in Radiology: Hurdles to overcome and Benefits to gain [2.8048919658768523]
Federated Learning (FL) enables collaborative model training while keeping data locally.
Currently, most FL studies in radiology are conducted in simulated environments.
The few existing real-world FL initiatives rarely communicate the specific measures taken to overcome these hurdles.
arXiv Detail & Related papers (2024-05-15T15:04:27Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization [84.42306265220274]
Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model.
Previous works have implicitly shown that FL suffers from the "client-drift" problem, which is caused by inconsistent optima across local clients.
To alleviate the negative impact of "client drift" and explore its substance in FL, we first design an efficient FL algorithm, FedInit.
arXiv Detail & Related papers (2023-06-09T06:55:15Z) - Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens, and training is disrupted if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared-parameter aggregation process, we propose DFed, integrating local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z) - FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
arXiv Detail & Related papers (2022-12-14T12:08:30Z) - Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges [0.0]
Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data.
Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation.
This article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators.
arXiv Detail & Related papers (2022-11-15T18:51:20Z) - ISFL: Federated Learning for Non-i.i.d. Data with Local Importance Sampling [17.29669920752378]
We propose importance sampling federated learning (ISFL), an explicit framework with theoretical guarantees.
We derive the convergence theorem of ISFL to involve the effects of local importance sampling.
We employ a water-filling method to calculate the IS weights and develop the ISFL algorithms.
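The ISFL summary only names a water-filling method for computing the IS weights. As an illustration of what a water-filling computation looks like, here is the classic water-filling allocation solved by bisection; connecting these generic allocations to ISFL's actual weight rule is an assumption, and the paper's own formulation is not reproduced here:

```python
def water_filling(levels, total, tol=1e-10):
    """Classic water-filling: pour `total` units of water over bins with
    ground heights `levels`; each bin receives max(0, mu - c_i), where
    the common water level mu is found by bisection."""
    lo, hi = min(levels), max(levels) + total
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        poured = sum(max(0.0, mu - c) for c in levels)
        if poured > total:
            hi = mu  # water level too high, lower it
        else:
            lo = mu  # not all water used, raise the level
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - c) for c in levels]

alloc = water_filling([1.0, 2.0, 4.0], total=3.0)
# Allocations sum to the total budget; the highest "ground" (4.0)
# receives nothing because the water level settles below it.
print(alloc)  # approximately [2.0, 1.0, 0.0]
```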
arXiv Detail & Related papers (2022-10-05T09:43:58Z) - Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical, industry-ready variant of VFL.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z) - DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning [2.43923223501858]
Federated learning (FL) is an emerging and promising paradigm of privacy-preserving machine learning (ML).
We propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL.
DeFL eliminates the central server by aggregating weights on each participating node and weights of only the current training round are maintained and synchronized among all nodes.
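The DeFL aggregation rule described above (no central server; each node averages the current round's weights from all participating nodes and discards older rounds) can be sketched as follows, with the synchronous loop and toy weight vectors as illustrative assumptions rather than DeFL's actual protocol:

```python
import numpy as np

def serverless_round(node_weights):
    """Serverless cross-silo aggregation: every node receives the
    current-round weights of all participating nodes and averages
    them locally. Only the current round is kept; previous rounds
    are discarded, bounding per-node storage."""
    stacked = np.stack(node_weights)
    avg = stacked.mean(axis=0)
    # Each node independently computes the same average, so the
    # nodes agree on the round's result without a central server.
    return [avg.copy() for _ in node_weights]

weights = [np.array([1.0, 0.0]),
           np.array([3.0, 2.0]),
           np.array([2.0, 4.0])]
agreed = serverless_round(weights)
print(agreed[0])  # every node holds the same averaged weights
```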
arXiv Detail & Related papers (2022-08-01T13:36:49Z) - Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew prevents current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.