FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2602.15337v2
- Date: Fri, 20 Feb 2026 11:17:32 GMT
- Title: FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning
- Authors: Chaoyi Lu, Yiding Sun, Zhichuan Yang, Jinqian Chen, Dongfu Yin, Jihua Zhu
- Abstract summary: Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. Due to the staleness introduced by the asynchronous process, its performance may degrade in some scenarios. Existing methods often use the round difference between the current model and the global model as the sole measure of staleness.
- Score: 19.01943754722055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By not waiting for slower clients and executing the training process concurrently, it achieves faster training speed compared to traditional federated learning. However, due to the staleness introduced by the asynchronous process, its performance may degrade in some scenarios. Existing methods often use the round difference between the current model and the global model as the sole measure of staleness, which is coarse-grained and lacks observation of the model itself, thereby limiting the performance ceiling of asynchronous methods. In this paper, we propose FedPSA (Parameter Sensitivity-based Asynchronous Federated Learning), a more fine-grained AFL framework that leverages parameter sensitivity to measure model obsolescence and establishes a dynamic momentum queue to assess the current training phase in real time, thereby adjusting the tolerance for outdated information dynamically. Extensive experiments on multiple datasets and comparisons with various methods demonstrate the superior performance of FedPSA, achieving up to 6.37% improvement over baseline methods and 1.93% over the current state-of-the-art method.
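The abstract does not give the algorithmic details, so the following is only a minimal sketch of the idea it describes: a server that discounts a stale client update using per-parameter drift ("sensitivity") rather than the round difference alone, with a short queue of recent global-update magnitudes standing in for the dynamic momentum queue that adjusts staleness tolerance by training phase. All function names and formulas here are illustrative assumptions, not FedPSA's actual method.

```python
# Illustrative sketch only: parameter-sensitivity-based staleness weighting
# (assumed formulas, not the authors' released code).
from collections import deque
import numpy as np

def parameter_sensitivity(global_model, stale_base):
    """Hypothetical fine-grained staleness: relative drift of the global
    parameters since the client snapshotted them (larger drift = more obsolete)."""
    drift = np.abs(global_model - stale_base)
    return drift / (np.abs(global_model) + 1e-8)

def staleness_weight(sensitivity, momentum_queue, base_tolerance=1.0):
    """Map mean sensitivity to an aggregation weight in (0, 1]; a queue of
    recent global-update magnitudes loosens tolerance early in training."""
    phase = float(np.mean(momentum_queue)) if momentum_queue else 1.0
    tolerance = max(base_tolerance * phase, 1e-8)
    return float(np.exp(-float(np.mean(sensitivity)) / tolerance))

def async_aggregate(global_model, client_model, client_base, momentum_queue, lr=1.0):
    """Server step: discount the client's delta by its behavioral staleness."""
    sens = parameter_sensitivity(global_model, client_base)
    w = staleness_weight(sens, momentum_queue)
    new_global = global_model + lr * w * (client_model - client_base)
    momentum_queue.append(float(np.linalg.norm(new_global - global_model)))
    return new_global, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=8)                     # current global parameters
    base = g - 0.5 * rng.normal(size=8)        # stale snapshot the client pulled earlier
    client = base + 0.1 * rng.normal(size=8)   # parameters after local training
    queue = deque(maxlen=5)                    # rolling "momentum" of update magnitudes
    g, w = async_aggregate(g, client, base, queue)
    print(f"aggregation weight: {w:.3f}")
```

The intuition this sketch captures is the one the abstract states: two updates with the same round gap can be more or less obsolete depending on how much the model itself has moved, and the server can afford to be more tolerant of staleness early in training while global updates are still large.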
Related papers
- End-to-End Training for Autoregressive Video Diffusion via Self-Resampling [63.84672807009907]
Autoregressive video diffusion models hold promise for world simulation but are vulnerable to exposure bias arising from the train-test mismatch. We introduce Resampling Forcing, a teacher-free framework that enables training autoregressive video models from scratch and at scale.
arXiv Detail & Related papers (2025-12-17T18:53:29Z) - Efficient Federated Learning with Timely Update Dissemination [54.668309196009204]
Federated Learning (FL) has emerged as a compelling methodology for the management of distributed data. We propose an efficient FL approach that capitalizes on additional downlink bandwidth resources to ensure timely update dissemination.
arXiv Detail & Related papers (2025-07-08T14:34:32Z) - Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible [2.663489028501814]
This paper proposes an asynchronous federated learning algorithm with version correction based on knowledge distillation, named FedADT. FedADT applies knowledge distillation before aggregating gradients, using the latest global model to correct outdated information, thus effectively reducing the negative impact of outdated gradients on the training process. We conducted experimental comparisons with several classical algorithms, and the results demonstrate that FedADT achieves significant improvements over other asynchronous methods and outperforms all methods in terms of convergence speed.
arXiv Detail & Related papers (2025-04-05T06:54:13Z) - SEAFL: Enhancing Efficiency in Semi-Asynchronous Federated Learning through Adaptive Aggregation and Selective Training [26.478852701376294]
We present SEAFL, a novel FL framework designed to mitigate both the straggler and the stale model challenges in semi-asynchronous FL. SEAFL dynamically assigns weights to uploaded models during aggregation based on their staleness and importance to the current global model. We evaluate the effectiveness of SEAFL through extensive experiments on three benchmark datasets.
arXiv Detail & Related papers (2025-02-22T05:13:53Z) - FedFa: A Fully Asynchronous Training Paradigm for Federated Learning [14.4313600357833]
Federated learning is an efficient decentralized training paradigm for scaling machine learning model training across a large number of devices.
Recent state-of-the-art solutions propose using semi-asynchronous approaches to mitigate the waiting time cost with guaranteed convergence.
We propose a fully asynchronous training paradigm, called FedFa, which can guarantee model convergence and eliminate the waiting time completely.
arXiv Detail & Related papers (2024-04-17T02:46:59Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Adaptive Training Meets Progressive Scaling: Elevating Efficiency in Diffusion Models [52.1809084559048]
We propose a novel two-stage divide-and-conquer training strategy termed TDC Training. It groups timesteps based on task similarity and difficulty, assigning highly customized denoising models to each group, thereby enhancing the performance of diffusion models. While two-stage training avoids the need to train each model separately, the total training cost is even lower than training a single unified denoising model.
arXiv Detail & Related papers (2023-12-20T03:32:58Z) - Knowledge Rumination for Client Utility Evaluation in Heterogeneous Federated Learning [12.50871784200551]
Federated Learning (FL) allows several clients to cooperatively train machine learning models without disclosing the raw data. Non-IID data and stale models pose significant challenges to AFL, as they can diminish the practicality of the global model and even lead to training failures. We propose a novel AFL framework called Federated Historical Learning (FedHist), which effectively addresses the challenges posed by both Non-IID data and gradient staleness.
arXiv Detail & Related papers (2023-12-16T11:40:49Z) - TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z) - AsyncFedED: Asynchronous Federated Learning with Euclidean Distance
based Adaptive Weight Aggregation [17.57059932879715]
In an asynchronous learning framework, a server updates the global model as soon as it receives an update from a client instead of waiting for all the updates to arrive as in the synchronous setting (a minimal sketch of this server loop appears after this list).
An adaptive weight aggregation algorithm, referred to as AsyncFedED, is proposed.
arXiv Detail & Related papers (2022-05-27T07:18:11Z) - Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z) - Training Generative Adversarial Networks by Solving Ordinary
Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content shown here (including all information) and is not responsible for any consequences.