FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2510.07664v1
- Date: Thu, 09 Oct 2025 01:32:19 GMT
- Title: FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning
- Authors: Yunbo Li, Jiaping Gui, Zhihang Deng, Fanchao Meng, Yue Wu
- Abstract summary: Federated learning (FL) enables collaborative model training across multiple parties without sharing raw data. This paper presents FedQS, the first framework to theoretically analyze and address these disparities in SAFL. Our work bridges the gap between aggregation strategies in SAFL, offering a unified solution for stable, accurate, and efficient federated learning.
- Score: 8.906501632865908
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables collaborative model training across multiple parties without sharing raw data, with semi-asynchronous FL (SAFL) emerging as a balanced approach between synchronous and asynchronous FL. However, SAFL faces significant challenges in optimizing both gradient-based (e.g., FedSGD) and model-based (e.g., FedAvg) aggregation strategies, which exhibit distinct trade-offs in accuracy, convergence speed, and stability. While gradient aggregation achieves faster convergence and higher accuracy, it suffers from pronounced fluctuations, whereas model aggregation offers greater stability but slower convergence and suboptimal accuracy. This paper presents FedQS, the first framework to theoretically analyze and address these disparities in SAFL. FedQS introduces a divide-and-conquer strategy to handle client heterogeneity by classifying clients into four distinct types and adaptively optimizing their local training based on data distribution characteristics and available computational resources. Extensive experiments on computer vision, natural language processing, and real-world tasks demonstrate that FedQS achieves the highest accuracy, attains the lowest loss, and ranks among the fastest in convergence speed, outperforming state-of-the-art baselines. Our work bridges the gap between aggregation strategies in SAFL, offering a unified solution for stable, accurate, and efficient federated learning. The code and datasets are available at https://anonymous.4open.science/r/FedQS-EDD6.
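The abstract's contrast between gradient-based (FedSGD-style) and model-based (FedAvg-style) aggregation can be made concrete with a toy sketch. The linear-regression task, client count, learning rate, and number of local steps below are illustrative assumptions for exposition; this is not the FedQS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task shared by 4 simulated clients.
d = 5
w_true = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(32, d))
    y = X @ w_true + 0.1 * rng.normal(size=32)
    clients.append((X, y))

def grad(w, X, y):
    # Mean-squared-error gradient for one client's local data.
    return 2 * X.T @ (X @ w - y) / len(y)

def fedsgd_round(w, lr=0.1):
    # Gradient aggregation: server averages one gradient per client,
    # then takes a single step on the global model.
    g = np.mean([grad(w, X, y) for X, y in clients], axis=0)
    return w - lr * g

def fedavg_round(w, lr=0.1, local_steps=5):
    # Model aggregation: each client runs several local steps,
    # then the server averages the resulting models.
    models = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_local -= lr * grad(w_local, X, y)
        models.append(w_local)
    return np.mean(models, axis=0)

w_g = np.zeros(d)
w_m = np.zeros(d)
for _ in range(50):
    w_g = fedsgd_round(w_g)
    w_m = fedavg_round(w_m)
```

On homogeneous, synchronous data like this toy example both strategies converge to essentially the same solution; the trade-offs the paper studies (fluctuation vs. stability) emerge under client heterogeneity and staleness, which this sketch deliberately omits.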
Related papers
- Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z) - CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z) - Asynchronous Federated Learning with non-convex client objective functions and heterogeneous dataset [0.9208007322096533]
Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy. Asynchronous Federated Learning (AFL) addresses these challenges by allowing clients to update independently, improving scalability and reducing synchronization delays. Our framework accommodates variations in computational power, data distribution, and communication, making it practical for real-world applications.
arXiv Detail & Related papers (2025-08-03T09:06:42Z) - Adaptive Deadline and Batch Layered Synchronized Federated Learning [66.93447103966439]
Federated learning (FL) enables collaborative model training across distributed edge devices while preserving data privacy, and typically operates in a round-based synchronous manner. We propose ADEL-FL, a novel framework that jointly optimizes per-round deadlines and user-specific batch sizes for layer-wise aggregation.
arXiv Detail & Related papers (2025-05-29T19:59:18Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - FedStaleWeight: Buffered Asynchronous Federated Learning with Fair Aggregation via Staleness Reweighting [9.261784956541641]
Asynchronous Federated Learning (AFL) methods have emerged as promising alternatives to their synchronous counterparts, which are bottlenecked by the slowest agent.
However, AFL skews model training heavily towards agents who can produce updates faster, leaving slower agents behind.
We introduce FedStaleWeight, an algorithm that addresses unfairness in aggregating asynchronous client updates by employing average staleness to compute fair re-weightings.
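The staleness-reweighting idea can be sketched as follows. The inverse-staleness weight used here is a simplified illustrative assumption, not the exact FedStaleWeight formula (which is derived from average staleness); the `alpha` parameter is likewise hypothetical.

```python
import numpy as np

def staleness_weights(staleness, alpha=1.0):
    # Simplified inverse-staleness weighting (illustrative, not the exact
    # FedStaleWeight rule): fresher updates receive larger weights,
    # and the weights are normalized to sum to 1.
    raw = 1.0 / (1.0 + alpha * np.asarray(staleness, dtype=float))
    return raw / raw.sum()

def aggregate(updates, staleness):
    # Weighted average of client updates, discounting stale ones.
    return np.average(np.asarray(updates, dtype=float), axis=0,
                      weights=staleness_weights(staleness))
```

For example, with staleness `[0, 1]`, the fresh update carries twice the weight of the one-round-stale update, so slower agents still contribute without dominating or vanishing from the aggregate.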
arXiv Detail & Related papers (2024-06-05T02:52:22Z) - Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [1.9766522384767224]
AsynchRonous Exact Averaging (AREA): AREA communicates model residuals rather than gradient estimates, reducing exposure to gradient inversion. We are the first to obtain rates that scale with the average client update frequency rather than the minimum or maximum, indicating increased robustness to outliers.
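The residual-communication idea can be sketched in a few lines: each client returns only the difference between its locally updated model and the global model it received, rather than raw gradients. The class names, learning rate, and local-step count below are illustrative assumptions; this is a sketch of the general idea, not the actual AREA algorithm.

```python
import numpy as np

class ResidualClient:
    """Toy client that communicates a model residual (delta) instead of
    gradient estimates -- a sketch of residual communication, not AREA."""
    def __init__(self, X, y, lr=0.1, local_steps=3):
        self.X, self.y = X, y
        self.lr, self.local_steps = lr, local_steps

    def update(self, w_global):
        # Run a few local gradient steps on a least-squares objective,
        # then return only the residual (local model minus global model).
        w = w_global.copy()
        for _ in range(self.local_steps):
            g = 2 * self.X.T @ (self.X @ w - self.y) / len(self.y)
            w -= self.lr * g
        return w - w_global  # residual, not a raw gradient

def server_round(w_global, residuals):
    # The server applies the average residual to the global model.
    return w_global + np.mean(residuals, axis=0)
```

A design note on why residuals help: the residual already folds together several local steps, so the server never sees a per-example gradient, which is what gradient-inversion attacks typically exploit.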
arXiv Detail & Related papers (2024-05-16T14:22:49Z) - Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [55.0981921695672]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm. It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - Speeding up Heterogeneous Federated Learning with Sequentially Trained Superclients [19.496278017418113]
Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing.
This approach raises several challenges due to the different statistical distribution of the local datasets and the clients' computational heterogeneity.
We propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e. superclients, to emulate the centralized paradigm in a privacy-compliant way.
arXiv Detail & Related papers (2022-01-26T12:33:23Z) - CSAFL: A Clustered Semi-Asynchronous Federated Learning Framework [14.242716751043533]
Federated learning (FL) is an emerging distributed machine learning paradigm that protects privacy and tackles the problem of isolated data islands.
There are two main communication strategies of FL: synchronous FL and asynchronous FL.
We propose a clustered semi-asynchronous federated learning framework.
arXiv Detail & Related papers (2021-04-16T15:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.