Client Selection Approach in Support of Clustered Federated Learning
over Wireless Edge Networks
- URL: http://arxiv.org/abs/2108.08768v1
- Date: Mon, 16 Aug 2021 21:38:22 GMT
- Title: Client Selection Approach in Support of Clustered Federated Learning
over Wireless Edge Networks
- Authors: Abdullatif Albaseer, Mohamed Abdallah, Ala Al-Fuqaha, and Aiman Erbad
- Abstract summary: Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models.
This paper proposes a new client selection algorithm that aims to accelerate the convergence rate for obtaining specialized machine learning models.
- Score: 2.6774008509840996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clustered Federated Multitask Learning (CFL) was introduced as an efficient
scheme to obtain reliable specialized models when data is imbalanced and
distributed in a non-i.i.d. (non-independent and identically distributed)
fashion amongst clients. While a similarity metric, such as cosine similarity,
can be used to endow groups of clients with a specialized model, this process
can be arduous because the server must involve all clients in every federated
learning round. Given the limited bandwidth and latency constraints at the
network edge, it is therefore imperative to periodically select only a subset
of clients. To this end, this paper proposes a new client
selection algorithm that aims to accelerate the convergence rate for obtaining
specialized machine learning models that achieve high test accuracies for all
client groups. Specifically, we introduce a client selection approach that
leverages device heterogeneity to schedule clients based on their round latency
and exploits bandwidth reuse for clients that need more time to update the
model. The server then performs model averaging and clusters the clients based
on predefined thresholds. When a specific cluster reaches a stationary point,
the proposed approach switches to a greedy scheduling algorithm for that group,
selecting the clients with the lowest latency to update
the model. Extensive experiments show that the proposed approach lowers the
training time and accelerates the convergence rate by up to 50% while imbuing
each client with a specialized model that is fit for its local data
distribution.
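Read operationally, the abstract describes a four-step loop: latency-aware client selection with bandwidth reuse, server-side model averaging, threshold-based clustering of the clients, and greedy low-latency scheduling once a cluster reaches a stationary point. The following is a minimal Python sketch of that loop under stated assumptions: all helper names, the admission rule, and the threshold values `eps_1`/`eps_2` are illustrative rather than the paper's implementation, and the bandwidth-reuse schedule is abstracted away.
```python
# Minimal sketch; names and thresholds are illustrative assumptions.
import numpy as np

def select_by_latency(round_latency, deadline):
    """Latency-aware selection: admit every client whose estimated round
    latency (compute + upload) fits the round deadline. The paper's
    bandwidth-reuse schedule for slow clients is abstracted away here."""
    return [k for k, t in enumerate(round_latency) if t <= deadline]

def federated_average(updates, weights):
    """Weighted averaging of the selected clients' model updates."""
    w = np.asarray(weights, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=w / w.sum())

def cosine(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def maybe_split(cluster, updates, eps_1=0.4, eps_2=1.6):
    """Split a cluster only near a stationary point of its averaged
    objective: the mean update is small (< eps_1) while some client
    update remains large (> eps_2). eps_1/eps_2 stand in for the
    abstract's 'predefined thresholds'. The naive seed-based bipartition
    below replaces the optimal pairwise-similarity bipartition used in
    the clustered-FL literature."""
    vecs = [updates[k] for k in cluster]
    mean_norm = np.linalg.norm(np.mean(vecs, axis=0))
    max_norm = max(np.linalg.norm(v) for v in vecs)
    if mean_norm >= eps_1 or max_norm <= eps_2:
        return [cluster]  # not stationary yet: keep the cluster intact
    seed = updates[cluster[0]]
    group_a = [k for k in cluster if cosine(updates[k], seed) >= 0.0]
    group_b = [k for k in cluster if k not in group_a]
    return [group_a, group_b] if group_b else [group_a]

def greedy_schedule(cluster, round_latency, m):
    """After a cluster reaches a stationary point, greedily pick its m
    lowest-latency clients for subsequent rounds."""
    return sorted(cluster, key=lambda k: round_latency[k])[:m]
```
Each round, the server would call `select_by_latency`, aggregate the returned updates per cluster with `federated_average`, test each cluster with `maybe_split`, and switch a stationary cluster's selection to `greedy_schedule`.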
Related papers
- Towards Client Driven Federated Learning [7.528642177161784]
We introduce Client-Driven Federated Learning (CDFL), a novel FL framework that puts clients at the driving role.
In CDFL, each client independently and asynchronously updates its model by uploading the locally trained model to the server and receiving a customized model tailored to its local task.
arXiv Detail & Related papers (2024-05-24T10:17:49Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set (see the gradient-space sketch after this list).
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints (see the greedy-selection sketch after this list).
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
- Multi-Criteria Client Selection and Scheduling with Fairness Guarantee for Federated Learning Service [17.986744632466515]
Federated Learning (FL) enables multiple clients to train machine learning models collaboratively without sharing the raw training data.
We propose a multi-criteria client selection and scheduling scheme with a fairness guarantee.
Our scheme can improve model quality, especially when data are non-i.i.d.
arXiv Detail & Related papers (2023-12-05T16:56:24Z)
- Timely Asynchronous Hierarchical Federated Learning: Age of Convergence [59.96266198512243]
We consider an asynchronous hierarchical federated learning setting with a client-edge-cloud framework.
The clients exchange the trained parameters with their corresponding edge servers, which update the locally aggregated model.
The goal of each client is to converge to the global model while maintaining the timeliness of its updates.
arXiv Detail & Related papers (2023-06-21T17:39:16Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms attempt to infer these similarities indirectly during training.
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- Clustered Scheduling and Communication Pipelining For Efficient Resource Management Of Wireless Federated Learning [6.753282396352072]
This paper proposes using communication pipelining to enhance the wireless spectrum utilization efficiency and convergence speed of federated learning.
We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution.
arXiv Detail & Related papers (2022-06-15T16:23:19Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g., mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
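Two of the related papers above describe their selection criteria concretely enough to sketch. First, for the "Emulating Full Client Participation" entry, here is a greedy rendering of the gradient-space estimation-error objective as summarized; this is an illustrative assumption, not the paper's algorithm, and it omits the multi-round fairness constraint.
```python
import numpy as np

def gradient_space_subset(gradients, m):
    """Greedily grow a client subset whose mean gradient best matches the
    full-participation mean gradient (illustrative rendering of the
    'gradient-space estimation error'; the paper's method and fairness
    constraint are more involved)."""
    g = np.stack(gradients)      # one flattened gradient per client
    target = g.mean(axis=0)      # mean gradient of the full client set
    chosen = []
    for _ in range(m):
        best, best_err = None, np.inf
        for k in range(len(gradients)):
            if k in chosen:
                continue
            err = np.linalg.norm(g[chosen + [k]].mean(axis=0) - target)
            if err < best_err:
                best, best_err = k, err
        chosen.append(best)
    return chosen
```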
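Second, for the GreedyFed entry, a simplified stand-in for greedy, contribution-based selection: clients are ranked by a running estimate of the validation-accuracy gain their updates produce. The actual GreedyFed algorithm estimates Shapley values, which this sketch does not compute, and `alpha` is an illustrative smoothing factor.
```python
import numpy as np

def update_contribution(contrib, client_id, marginal_gain, alpha=0.1):
    """Exponential moving average of the validation-accuracy gain observed
    when a client's update is aggregated (alpha is an assumption, not a
    parameter from the paper)."""
    contrib[client_id] = (1.0 - alpha) * contrib[client_id] + alpha * marginal_gain
    return contrib

def greedy_select(contrib, m):
    """Greedily pick the m clients with the highest estimated contribution,
    a stand-in for selection by estimated Shapley value."""
    return np.argsort(contrib)[::-1][:m].tolist()
```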