MAB-Based Channel Scheduling for Asynchronous Federated Learning in Non-Stationary Environments
- URL: http://arxiv.org/abs/2503.01324v2
- Date: Sun, 23 Mar 2025 06:54:42 GMT
- Title: MAB-Based Channel Scheduling for Asynchronous Federated Learning in Non-Stationary Environments
- Authors: Zhiyin Li, Yubo Yang, Tao Yang, Ziyu Guo, Xiaofeng Wu, Bo Hu
- Abstract summary: Federated learning enables distributed model training across clients without raw data exchange. In wireless implementations, frequent parameter updates cause high communication overhead. We propose an asynchronous federated learning scheduling framework to reduce client staleness while enhancing communication efficiency and fairness.
- Score: 12.404264058659429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables distributed model training across clients without raw data exchange, but in wireless implementations, frequent parameter updates cause high communication overhead. Existing research often assumes known channel state information (CSI) or stationary channels, though practical wireless channels are non-stationary due to fading, user mobility, and attacks, leading to unpredictable transmission failures and exacerbating client staleness, which hampers model convergence. To tackle these challenges, we propose an asynchronous federated learning scheduling framework for non-stationary channels that aims to reduce client staleness while enhancing communication efficiency and fairness. Our framework considers two scenarios: extremely non-stationary and piecewise-stationary channels. Age of Information (AoI) quantifies client staleness under these conditions. We conduct convergence analysis to examine the impact of AoI and per-round client participation on learning performance and formulate the scheduling problem as a multi-armed bandit (MAB) problem. We derive theoretical lower bounds on AoI regret and develop scheduling strategies based on GLR-CUCB and M-exp3 algorithms, including upper bounds on AoI regret. To address imbalanced client updates, we propose an adaptive matching strategy that incorporates marginal utility and fairness considerations. Simulation results show that our algorithm achieves sub-linear AoI regret, accelerates convergence, and promotes fairer aggregation.
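The coupling the abstract describes, a bandit learner whose scheduling decisions also drive each client's Age of Information, can be illustrated with a small sketch. The code below is not the paper's M-exp3 or GLR-CUCB procedure, whose details are not reproduced here; it is a plain Exp3 learner that schedules one client per round over a made-up non-stationary channel (`success_prob` is invented for the example) and tracks per-client AoI.

```python
import math
import random

def exp3_aoi_scheduler(num_clients, num_rounds, success_prob, gamma=0.1, seed=0):
    """Schedule one client per round with Exp3 and track Age of Information.

    success_prob: callable (client, round) -> probability that the scheduled
    client's update is delivered, so the channel may be non-stationary.
    """
    rng = random.Random(seed)
    weights = [1.0] * num_clients
    aoi = [1] * num_clients          # rounds since each client's last update
    aoi_history = []

    for t in range(num_rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / num_clients for w in weights]
        k = rng.choices(range(num_clients), weights=probs)[0]

        # Attempt transmission on the scheduled client's channel.
        success = rng.random() < success_prob(k, t)

        # Importance-weighted reward estimate keeps the update unbiased.
        reward = 1.0 if success else 0.0
        weights[k] *= math.exp(gamma * (reward / probs[k]) / num_clients)

        # Normalize to avoid floating-point overflow over long horizons.
        m = max(weights)
        weights = [w / m for w in weights]

        # AoI grows for everyone; a successful update resets the sender.
        aoi = [a + 1 for a in aoi]
        if success:
            aoi[k] = 1
        aoi_history.append(sum(aoi) / num_clients)

    return aoi_history

# Example: channel quality drifts continuously (extremely non-stationary).
avg_aoi = exp3_aoi_scheduler(
    num_clients=5,
    num_rounds=2000,
    success_prob=lambda k, t: 0.5 + 0.4 * math.sin(0.01 * t + k),
)
print(f"mean AoI over the run: {sum(avg_aoi) / len(avg_aoi):.2f}")
```

Exp3 makes no stationarity assumption on rewards, which is why an Exp3-style learner remains sensible when the channel drifts arbitrarily; per the abstract, the piecewise-stationary scenario is instead handled by a stochastic bandit with change detection (GLR-CUCB).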
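The abstract's adaptive matching strategy is only named, not specified, so the following is a guess at its general shape: a greedy client-to-channel matching that scores each pair by estimated marginal utility plus a bonus for under-scheduled clients. All names, parameters, and the scoring rule are illustrative assumptions, not the paper's formulation.

```python
def fairness_aware_matching(clients, channels, utility, participation, lam=0.5):
    """Greedily match clients to channels by marginal utility plus a fairness bonus.

    utility[(c, ch)]: estimated gain from scheduling client c on channel ch.
    participation[c]: how often client c has been scheduled so far; clients
    that have participated less receive a boost, promoting fairer aggregation.
    """
    matched, free_channels = {}, set(channels)
    for _ in range(min(len(clients), len(channels))):
        best = max(
            ((c, ch) for c in clients if c not in matched for ch in free_channels),
            key=lambda p: utility[p] + lam / (1 + participation[p[0]]),
        )
        c, ch = best
        matched[c] = ch
        free_channels.remove(ch)
    return matched

# Toy instance: client "b" has never participated, so it wins the better channel.
m = fairness_aware_matching(
    clients=["a", "b"],
    channels=[0, 1],
    utility={("a", 0): 0.9, ("a", 1): 0.2, ("b", 0): 0.8, ("b", 1): 0.1},
    participation={"a": 10, "b": 0},
)
print(m)  # {'b': 0, 'a': 1}
```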
Related papers
- Asynchronous Federated Learning with non-convex client objective functions and heterogeneous dataset [0.9208007322096533]
Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy. Asynchronous Federated Learning (AFL) addresses these limitations by allowing clients to update independently, improving scalability and reducing synchronization delays. Our framework accommodates variations in computational power, data distribution, and communication, making it practical for real-world applications.
arXiv Detail & Related papers (2025-08-03T09:06:42Z) - Stratify: Rethinking Federated Learning for Non-IID Data through Balanced Sampling [9.774529150331297]
Stratify is a novel FL framework designed to systematically manage class and feature distributions throughout training.
Inspired by classical stratified sampling, our approach employs a Stratified Label Schedule (SLS) to ensure balanced exposure across labels.
To uphold privacy, we implement a secure client selection protocol leveraging homomorphic encryption.
arXiv Detail & Related papers (2025-04-18T04:44:41Z) - Exact and Linear Convergence for Federated Learning under Arbitrary Client Participation is Attainable [9.870718388000645]
This work tackles the fundamental challenges in Federated Learning (FL). It is well-established that popular FedAvg-style algorithms struggle with exact convergence. We present FOCUS, Federated Optimization with Exact Convergence via Push-pull Strategy, a provably convergent algorithm.
arXiv Detail & Related papers (2025-03-25T23:54:23Z) - FedRTS: Federated Robust Pruning via Combinatorial Thompson Sampling [12.067872131025231]
Federated Learning (FL) enables collaborative model training across distributed clients without data sharing. Current methods use dynamic pruning to improve efficiency by periodically adjusting sparse model topologies while maintaining sparsity. We propose Federated Robust pruning via Thompson Sampling (FedRTS), a novel framework designed to develop robust sparse models.
arXiv Detail & Related papers (2025-01-31T13:26:22Z) - Asynchronous Federated Learning: A Scalable Approach for Decentralized Machine Learning [0.9208007322096533]
Federated Learning (FL) has emerged as a powerful paradigm for decentralized machine learning, enabling collaborative model training across diverse clients without sharing raw data. Traditional FL approaches often face limitations in scalability and efficiency due to their reliance on synchronous client updates. We propose an Asynchronous Federated Learning (AFL) algorithm, which allows clients to update the global model independently and asynchronously.
arXiv Detail & Related papers (2024-12-23T17:11:02Z) - Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [0.0]
Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients").
Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients, and a decline in model accuracy under non-iid local data distributions ("client drift").
We propose and analyze Asynchronous Exact Averaging (AREA), a new (sub)gradient algorithm that utilizes communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies.
arXiv Detail & Related papers (2024-05-16T14:22:49Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity [60.791736094073]
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
We propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD.
The proposed scheme is validated through experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-02-19T17:42:35Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting (a toy staleness-weighting sketch appears after this list).
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates [6.758334200305236]
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the client.
In this paper, we propose to mitigate these failures and attacks from a spatial-temporal perspective.
Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space.
arXiv Detail & Related papers (2021-07-03T18:48:11Z) - Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features appealing properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowledge of the wireless channel state information or the statistical characteristics of the clients; a minimal UCB-style sketch of this idea appears after the list.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
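As referenced in the last entry above, client scheduling without CSI can be cast as a stochastic bandit over clients. Below is a minimal UCB1-style sketch under the simplifying assumption of stationary per-client success probabilities; the main paper's GLR-CUCB instead targets piecewise-stationary channels by layering change detection on top and restarting the statistics, a detail this sketch omits. The success probabilities are hypothetical.

```python
import math
import random

def ucb_client_scheduling(num_clients, num_rounds, success_prob, seed=0):
    """UCB1-style client scheduling when channel statistics are unknown.

    Each round schedules the client with the highest upper confidence
    bound on its empirical transmission success rate.
    """
    rng = random.Random(seed)
    counts = [0] * num_clients
    means = [0.0] * num_clients
    successes = 0.0

    for t in range(1, num_rounds + 1):
        if t <= num_clients:
            k = t - 1                     # play each arm once to initialize
        else:
            k = max(
                range(num_clients),
                key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < success_prob[k] else 0.0
        counts[k] += 1
        means[k] += (reward - means[k]) / counts[k]   # running average
        successes += reward

    return successes / num_rounds

# Hypothetical stationary per-client success probabilities.
rate = ucb_client_scheduling(4, 5000, success_prob=[0.9, 0.6, 0.4, 0.2])
print(f"fraction of rounds with a delivered update: {rate:.2f}")
```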
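Two entries above (the asynchronous FL algorithm and the "age-aware" aggregation design) share a recurring recipe: down-weighting stale client updates at the server. A toy sketch with a geometrically decaying weight in the update's age follows; the decay rule is an illustrative assumption, not the weighting actually used in those papers.

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Combine asynchronous client updates, down-weighting stale ones.

    updates: list of equally shaped model vectors; ages[i]: rounds elapsed
    since update i's base model was current. Weights decay geometrically
    with age and are renormalized to sum to one.
    """
    weights = np.array([decay ** a for a in ages], dtype=float)
    weights /= weights.sum()
    return (weights[:, None] * np.stack(updates)).sum(axis=0)

# Toy example: the third update was computed from a 4-round-old model.
new_global = age_aware_aggregate(
    updates=[np.array([1.0, 0.0, 0.0]),
             np.array([0.0, 1.0, 0.0]),
             np.array([0.0, 0.0, 1.0])],
    ages=[0, 1, 4],
)
print(new_global)  # the age-4 update contributes the least mass
```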