Client Selection and Bandwidth Allocation in Wireless Federated Learning
Networks: A Long-Term Perspective
- URL: http://arxiv.org/abs/2004.04314v1
- Date: Thu, 9 Apr 2020 01:06:41 GMT
- Title: Client Selection and Bandwidth Allocation in Wireless Federated Learning
Networks: A Long-Term Perspective
- Authors: Jie Xu, Heqiang Wang
- Abstract summary: This paper studies federated learning (FL) in a classic wireless network, where learning clients share a common wireless link to a coordinating server to perform federated model training using their local data.
In such wireless federated learning networks (WFLNs), optimizing the learning performance crucially depends on how clients are selected and how bandwidth is allocated among the selected clients in every learning round, as both radio and client energy resources are limited.
- Score: 8.325089307976654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies federated learning (FL) in a classic wireless network,
where learning clients share a common wireless link to a coordinating server to
perform federated model training using their local data. In such wireless
federated learning networks (WFLNs), optimizing the learning performance
depends crucially on how clients are selected and how bandwidth is allocated
among the selected clients in every learning round, as both radio and client
energy resources are limited. While existing works have made some attempts to
allocate the limited wireless resources to optimize FL, they focus on the
problem in individual learning rounds, overlooking an inherent yet critical
feature of federated learning. This paper brings a new long-term perspective to
resource allocation in WFLNs, realizing that learning rounds are not only
temporally interdependent but also have varying significance towards the final
learning outcome. To this end, we first design data-driven experiments to show
that different temporal client selection patterns lead to considerably
different learning performance. With the obtained insights, we formulate a
stochastic optimization problem for joint client selection and bandwidth
allocation under long-term client energy constraints, and develop a new
algorithm that utilizes only currently available wireless channel information
but can achieve long-term performance guarantee. Further experiments show that
our algorithm results in the desired temporal client selection pattern, is
adaptive to changing network environments, and far outperforms benchmarks that
ignore the long-term effect of FL.
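The abstract does not name the underlying technique, but making per-round selection and bandwidth decisions under long-term energy constraints with only currently available channel information is the classic setting for Lyapunov drift-plus-penalty optimization with per-client virtual queues. The sketch below illustrates that general pattern only; the tradeoff weight V, the toy energy model, and the constant utility term are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative drift-plus-penalty client selection (not the paper's algorithm).
def select_clients_drift_plus_penalty(clients, channel_gain, queues,
                                      energy_budget, V=10.0, num_select=5):
    scores = {}
    for k in clients:
        # Assumed toy model: transmit energy grows as the channel degrades.
        energy_cost = 1.0 / max(channel_gain[k], 1e-6)
        utility = 1.0  # placeholder for the client's per-round learning utility
        # Trade learning utility (weighted by V) against queue-weighted energy.
        scores[k] = V * utility - queues[k] * energy_cost
    selected = set(sorted(clients, key=scores.get, reverse=True)[:num_select])

    # Virtual energy-deficit queue: Q_k <- max(Q_k + e_k - budget, 0).
    # A large Q_k means client k has overspent its long-term energy budget,
    # so it is penalized in later rounds; the constraint holds on average.
    for k in clients:
        spent = 1.0 / max(channel_gain[k], 1e-6) if k in selected else 0.0
        queues[k] = max(queues[k] + spent - energy_budget, 0.0)
    return selected
```

Run over many rounds with fresh channel gains, this kind of rule produces exactly the temporally varying selection pattern the paper argues for: clients with good channels are favored now, while energy-hungry clients are throttled only on average rather than in every round.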
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
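A minimal sketch of the single-round idea, assuming the server holds (approximate) per-client gradient vectors; the greedy rule and equal client weighting are assumptions, and the multi-round fairness constraint is omitted:

```python
import numpy as np

def greedy_subset_by_gradient_error(grads, m):
    """Greedily pick m clients whose averaged gradient best approximates the
    full-participation average (the gradient-space estimation error above)."""
    full_avg = np.mean(list(grads.values()), axis=0)
    chosen, remaining = [], set(grads)
    for _ in range(m):
        # Add the client that most reduces the subset-vs-full estimation error.
        best = min(remaining, key=lambda c: np.linalg.norm(
            full_avg - np.mean([grads[k] for k in chosen + [c]], axis=0)))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```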
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
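As a rough sketch of the client-side ingredient, a local AMSGrad step looks as follows; the hyperparameters and the omission of bias correction are simplifications, and FedLALR's actual auto-tuning rule is defined in the paper:

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad state; each client can run this with its own lr."""
    def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.99, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # effective lr never grows
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```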
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - DynamicFL: Balancing Communication Dynamics and Client Manipulation for
Federated Learning [6.9138560535971605]
Federated Learning (FL) aims to train a global model by exploiting the decentralized data across millions of edge devices.
Because edge devices are geo-distributed and operate over highly dynamic networks in the wild, aggregating model updates from all participating devices incurs inevitable long-tail delays in FL.
We propose a novel FL framework, DynamicFL, by considering the communication dynamics and data quality across massive edge devices with a specially designed client manipulation strategy.
arXiv Detail & Related papers (2023-07-16T19:09:31Z) - Joint Age-based Client Selection and Resource Allocation for
Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
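A toy sketch of the age-aware part only, assuming a simple staleness-over-cost score; the paper's actual formulation jointly optimizes selection with NOMA resource allocation and adds the ANN-based prediction, both omitted here:

```python
def age_based_selection(ages, est_round_time, m):
    """Favor clients whose updates are stale (large age) but cheap to serve
    (small estimated round time). The scoring rule is an assumption."""
    score = {k: ages[k] / est_round_time[k] for k in ages}
    chosen = set(sorted(ages, key=score.get, reverse=True)[:m])
    for k in ages:
        ages[k] = 0 if k in chosen else ages[k] + 1  # reset or grow staleness
    return chosen
```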
arXiv Detail & Related papers (2023-04-18T13:58:16Z) - Client Selection for Generalization in Accelerated Federated Learning: A
Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL).
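The summary does not detail BSFL itself, so as orientation only, here is the generic UCB1 scoring that bandit-based client scheduling builds on; BSFL's actual reward definition and confidence term are the paper's own:

```python
import math

def ucb_scores(avg_reward, counts, t, c=2.0):
    """UCB1: exploit clients with high observed usefulness (avg_reward) while
    exploring rarely scheduled ones (small counts). t is the round index >= 1."""
    return {k: avg_reward[k] + c * math.sqrt(math.log(t) / max(counts[k], 1))
            for k in avg_reward}
```

Each round, the server would schedule the top-scoring clients, observe a reward (e.g., loss reduction), and update avg_reward and counts accordingly.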
arXiv Detail & Related papers (2023-03-18T09:45:58Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous distributions of communication and computational resources.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - Addressing Client Drift in Federated Continual Learning with Adaptive
Optimization [10.303676184878896]
We outline a framework for performing Federated Continual Learning (FCL) by using NetTailor as a candidate continual learning approach.
We show that adaptive federated optimization can reduce the adverse impact of client drift and showcase its effectiveness on CIFAR100, MiniImagenet, and Decathlon benchmarks.
arXiv Detail & Related papers (2022-03-24T20:00:03Z) - Context-Aware Online Client Selection for Hierarchical Federated
Learning [33.205640790962505]
Federated Learning (FL) has been considered as an appealing framework to tackle data privacy issues.
arXiv Detail & Related papers (2021-12-02T01:47:01Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
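A simplified view of the decomposition, with hypothetical layer shapes; the paper learns the sparse mask and additionally adds attention-weighted task parameters received from other clients, which this sketch omits:

```python
import numpy as np

def fedweit_weight(base, mask, task_specific):
    """Effective layer weight = shared global base, sparsely masked, plus the
    client's own sparse task-specific parameters (inter-client term omitted)."""
    return base * mask + task_specific

base = np.random.randn(64, 32)                        # global federated parameter
mask = (np.random.rand(64, 32) > 0.7).astype(float)   # sparse mask (binarized here)
task = 0.01 * np.random.randn(64, 32)                 # sparse task-specific parameters
w = fedweit_weight(base, mask, task)                  # weight the client actually uses
```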
arXiv Detail & Related papers (2020-03-06T13:33:48Z)