Sharp Bounds for Sequential Federated Learning on Heterogeneous Data
- URL: http://arxiv.org/abs/2405.01142v1
- Date: Thu, 2 May 2024 09:58:49 GMT
- Title: Sharp Bounds for Sequential Federated Learning on Heterogeneous Data
- Authors: Yipeng Li, Xinchen Lyu
- Abstract summary: There are two paradigms in Federated Learning (FL): parallel FL (PFL) and sequential FL (SFL).
In contrast to that of PFL, the convergence theory of SFL on heterogeneous data is still lacking.
We derive the upper bounds for strongly convex, general convex and non-convex objective functions.
We compare the upper bounds of SFL with those of PFL on heterogeneous data.
- Score: 5.872735527071425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are two paradigms in Federated Learning (FL): parallel FL (PFL), where models are trained in a parallel manner across clients; and sequential FL (SFL), where models are trained in a sequential manner across clients. In contrast to that of PFL, the convergence theory of SFL on heterogeneous data is still lacking. To resolve the theoretical dilemma of SFL, we establish sharp convergence guarantees for SFL on heterogeneous data with both upper and lower bounds. Specifically, we derive the upper bounds for strongly convex, general convex and non-convex objective functions, and construct the matching lower bounds for the strongly convex and general convex objective functions. Then, we compare the upper bounds of SFL with those of PFL, showing that SFL outperforms PFL (at least, when the level of heterogeneity is relatively high). Experimental results on quadratic functions and real data sets validate the counterintuitive comparison result.
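To make the contrast concrete, here is a minimal Python sketch (an illustration, not the paper's exact algorithms) of one communication round of PFL versus SFL on client-specific quadratics $f_i(x) = \frac{1}{2}(x - c_i)^2$; the spread of the minimizers $c_i$ stands in for data heterogeneity, loosely mirroring the paper's quadratic experiments. All function names and constants are illustrative assumptions.

```python
def local_sgd(x, c_i, lr=0.1, steps=5):
    """Run a few local gradient steps on client i's quadratic
    f_i(x) = 0.5 * (x - c_i)^2, whose gradient is (x - c_i)."""
    for _ in range(steps):
        x -= lr * (x - c_i)
    return x

def pfl_round(x, clients):
    """Parallel FL (FedAvg-style): every client starts from the same
    global model, trains locally in parallel, and the server averages."""
    return sum(local_sgd(x, c) for c in clients) / len(clients)

def sfl_round(x, clients):
    """Sequential FL: the model visits clients one after another, each
    continuing training from the previous client's output."""
    for c in clients:
        x = local_sgd(x, c)
    return x

clients = [-2.0, 0.5, 3.0]          # heterogeneous minimizers c_i
x_pfl = x_sfl = 10.0                # shared initialization
for _ in range(20):                 # 20 communication rounds
    x_pfl = pfl_round(x_pfl, clients)
    x_sfl = sfl_round(x_sfl, clients)
opt = sum(clients) / len(clients)   # minimizer of the average objective
print(f"PFL: {x_pfl:.4f}  SFL: {x_sfl:.4f}  optimum: {opt:.4f}")
```

In the sequential round, clients later in the visiting order pull the model toward their own minimizers, which is why SFL is typically run with a shuffled client order each round.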
Related papers
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods lags far behind that of normal FL.
We propose a novel learning approach, termed FuseFL, to endow OFL with superb performance and low communication and storage costs.
arXiv Detail & Related papers (2024-10-27T09:07:10Z)
- Sequential Federated Learning in Hierarchical Architecture on Non-IID Datasets [25.010661914466354]
In a real federated learning (FL) system, communication overhead for passing model parameters between the clients and the parameter server (PS) is often a bottleneck.
We propose sequential FL (SFL) in hierarchical FL (HFL) for the first time, which removes the central PS and enables model training to be completed only by passing the model between two adjacent ESs.
arXiv Detail & Related papers (2024-08-19T07:43:35Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Convergence Analysis of Split Federated Learning on Heterogeneous Data [10.61370409320618]
Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients.
In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner and the server trains the other; a minimal sketch of this split forward/backward exchange is given after the related-papers list.
We provide convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data.
arXiv Detail & Related papers (2024-02-23T07:59:23Z)
- Convergence Analysis of Sequential Federated Learning on Heterogeneous Data [5.872735527071425]
There are two categories of methods in Federated Learning (FL) for joint training across multiple clients: i) parallel FL (PFL), where clients train models in a parallel manner; and ii) sequential FL (SFL), where clients train models in a sequential manner.
In this paper, we establish the convergence guarantees of SFL on heterogeneous data, which were previously lacking.
Experimental results validate the counterintuitive analysis result that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
arXiv Detail & Related papers (2023-11-06T14:48:51Z)
- DFedADMM: Dual Constraints Controlled Model Inconsistency for Decentralized Federated Learning [52.83811558753284]
Decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network.
Existing DFL methods still suffer from two major challenges: local inconsistency and local overfitting.
arXiv Detail & Related papers (2023-08-16T11:22:36Z)
- Improving the Model Consistency of Decentralized Federated Learning [68.2795379609854]
Decentralized Federated Learning (DFL) discards the central server, and each client only communicates with its neighbors in a decentralized communication network.
Existing DFL suffers from inconsistency among local clients, which results in inferior performance compared to FL with a central server.
We propose DFedSAM-MGS, where $1-\lambda$ is the spectral gap of the gossip matrix and $Q$ is the number of gossip steps.
arXiv Detail & Related papers (2023-02-08T14:37:34Z)
- pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning [42.819532536636835]
We propose the first comprehensive pFL benchmark, pFL-Bench, for rapid, reproducible, standardized and thorough pFL evaluation.
The proposed benchmark contains more than 10 datasets in diverse application domains with unified data partition and realistic heterogeneous settings.
We highlight the benefits and potential of state-of-the-art pFL methods and hope pFL-Bench enables further pFL research and broad applications.
arXiv Detail & Related papers (2022-06-08T02:51:59Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
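As a companion to the split FL entry above, the following is a minimal, hypothetical NumPy sketch of the split forward/backward exchange that split FL builds on: the client holds the front part of the model (here a single matrix W1), the server holds the back part (W2), and only the cut-layer activation and its gradient cross the client-server boundary. Shapes, learning rate, and data are illustrative assumptions, and this single-client-part version omits the parallel averaging of client parts that split federated learning adds.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)) * 0.1   # client-side (front) weights
W2 = rng.normal(size=(1, 4)) * 0.1   # server-side (back) weights

def split_step(x, y, W1, W2, lr=0.1):
    """One training step of a two-part linear model under MSE loss.
    Only h (client -> server) and dh (server -> client) are exchanged."""
    n = x.shape[1]
    h = W1 @ x                    # client forward; h is sent to the server
    y_hat = W2 @ h                # server forward
    d_yhat = (y_hat - y) / n      # gradient of (0.5/n) * ||y_hat - y||^2
    dW2 = d_yhat @ h.T            # server-side weight gradient
    dh = W2.T @ d_yhat            # cut-layer gradient, returned to the client
    dW1 = dh @ x.T                # client-side weight gradient
    return W1 - lr * dW1, W2 - lr * dW2

# Two toy clients with different data (heterogeneity) sharing the server part.
clients = [(rng.normal(size=(3, 8)), rng.normal(size=(1, 8))) for _ in range(2)]
for _ in range(200):
    for x, y in clients:
        W1, W2 = split_step(x, y, W1, W2)
```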