FilFL: Client Filtering for Optimized Client Participation in Federated
Learning
- URL: http://arxiv.org/abs/2302.06599v2
- Date: Mon, 5 Jun 2023 17:58:24 GMT
- Title: FilFL: Client Filtering for Optimized Client Participation in Federated
Learning
- Authors: Fares Fourati, Salma Kharrat, Vaneet Aggarwal, Mohamed-Slim Alouini,
Marco Canini
- Abstract summary: We propose FilFL, a new approach to optimize client participation and training by introducing client filtering.
FilFL periodically filters the available clients to identify a subset that maximizes an objective function using an efficient greedy filtering algorithm.
Our empirical results demonstrate several benefits of our approach, including improved learning efficiency, faster convergence, and up to 10 percentage points higher test accuracy compared to scenarios where client filtering is not utilized.
- Score: 95.27347185031265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an emerging machine learning paradigm that enables
clients to train collaboratively without exchanging local data. The clients
participating in the training process have a crucial impact on the convergence
rate, learning efficiency, and model generalization. In this work, we propose
FilFL, a new approach to optimizing client participation and training by
introducing client filtering. FilFL periodically filters the available clients
to identify a subset that maximizes a combinatorial objective function using an
efficient greedy filtering algorithm. From this filtered-in subset, clients are
then selected for the training process. We provide a thorough analysis of FilFL
convergence in a heterogeneous setting and evaluate its performance across
diverse vision and language tasks and realistic federated scenarios with
time-varying client availability. Our empirical results demonstrate several
benefits of our approach, including improved learning efficiency, faster
convergence, and up to 10 percentage points higher test accuracy compared to
scenarios where client filtering is not utilized.
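The abstract describes a two-stage procedure: periodically run a greedy filter over the available clients, then sample training participants from the filtered-in subset. A minimal sketch of that loop follows; the concrete combinatorial objective is not given in this abstract, so `objective` below is a generic placeholder, not FilFL's actual function.

```python
import random

def greedy_filter(available_clients, objective, max_size=None):
    """Greedily build a client subset that maximizes a set objective.

    `objective` maps a set of client ids to a score. This is a generic
    greedy sketch, not the exact objective or algorithm used by FilFL.
    """
    selected = set()
    candidates = set(available_clients)
    best_score = objective(selected)
    while candidates and (max_size is None or len(selected) < max_size):
        # Pick the client with the largest marginal gain.
        gains = {c: objective(selected | {c}) - best_score for c in candidates}
        client, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:  # no remaining client improves the objective
            break
        selected.add(client)
        candidates.remove(client)
        best_score += gain
    return selected

def train_round(filtered_clients, num_participants):
    # From the filtered-in subset, sample clients for the actual round.
    k = min(num_participants, len(filtered_clients))
    return random.sample(sorted(filtered_clients), k)
```

With a modular (additive) objective the loop simply keeps every client with positive marginal value; the interesting behavior arises when the objective couples clients, which is where the greedy approximation matters.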
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions are selected with similar frequency.
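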
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- GPFL: A Gradient Projection-Based Client Selection Framework for Efficient Federated Learning [6.717563725609496]
Client selection is crucial in federated learning, as it determines which clients participate in each round.
We propose GPFL, which measures client value by comparing local and global descent directions.
GPFL exhibits shorter computation times through pre-selection and parameter reuse in federated learning.
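The descent-direction comparison summarized above can be illustrated with a cosine-similarity score between each client's local update and the global update; the scoring rule and the top-k pre-selection below are illustrative assumptions, not GPFL's exact formulation.

```python
import numpy as np

def direction_score(local_update, global_update):
    """Cosine similarity between a client's update and the global update.

    Clients whose updates align with the global descent direction score
    higher; GPFL's actual value measure may differ from this sketch.
    """
    num = float(np.dot(local_update, global_update))
    denom = float(np.linalg.norm(local_update) * np.linalg.norm(global_update))
    return num / denom if denom > 0 else 0.0

def pre_select(client_updates, global_update, k):
    # Rank clients by alignment and keep the top-k for this round.
    scores = {cid: direction_score(u, global_update)
              for cid, u in client_updates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```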
arXiv Detail & Related papers (2024-03-26T16:14:43Z)
- Accelerating Non-IID Federated Learning via Heterogeneity-Guided Client Sampling [17.56259695496955]
HiCS-FL is a novel client selection method in which the server estimates statistical heterogeneity of a client's data using the client's update of the network's output layer.
In non-IID settings HiCS-FL achieves faster convergence and lower training variance than state-of-the-art FL client selection schemes.
arXiv Detail & Related papers (2023-09-30T00:29:30Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning-rate scheduling converges and achieves linear speedup with respect to the number of clients.
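The heterogeneous local variant of AMSGrad described above can be sketched as a per-client optimizer with a locally adjusted step size; the 1/sqrt(t) decay and all constants below are illustrative assumptions, not FedLALR's actual schedule.

```python
import numpy as np

class LocalAMSGrad:
    """AMSGrad-style optimizer state kept per client.

    The learning-rate schedule (base_lr / sqrt(t)) is a simple stand-in
    for the client-adjusted rate in FedLALR, chosen for illustration.
    """

    def __init__(self, dim, base_lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of second moments (AMSGrad)
        self.base_lr = base_lr
        self.beta1, self.beta2, self.eps = beta1, beta2, eps
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        lr = self.base_lr / np.sqrt(self.t)  # client-local decaying rate
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```

Each client would run several such local steps per round before the server aggregates the resulting models.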
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel uniform data sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-01-25T03:52:45Z)
- When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning [41.51682329500003]
We propose a novel learning rate adaptation mechanism to adjust the server learning rate for the aggregated gradient in each round.
Through theoretical analysis, we identify a meaningful and robust indicator that is positively correlated with the optimal server learning rate.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
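The class-imbalance idea can be illustrated by measuring how far the aggregated label distribution of a candidate client group is from uniform. Both the squared-distance measure and the greedy grouping below are illustrative choices, and the plaintext computation stands in for Fed-CBS's homomorphically encrypted derivation, which this sketch omits.

```python
import numpy as np

def class_imbalance(client_label_counts):
    """Squared distance between the grouped label distribution and uniform.

    `client_label_counts` is a list of per-client class-count vectors.
    Fed-CBS derives its measure under homomorphic encryption; this
    plaintext version is for illustration only.
    """
    total = np.sum(client_label_counts, axis=0).astype(float)
    dist = total / total.sum()
    uniform = np.full_like(dist, 1.0 / len(dist))
    return float(np.sum((dist - uniform) ** 2))

def pick_balanced_group(label_counts, group_size):
    # Greedily add the client that keeps the group closest to class balance.
    group_idx, remaining = [], list(range(len(label_counts)))
    for _ in range(group_size):
        best = min(remaining, key=lambda i: class_imbalance(
            [label_counts[j] for j in group_idx + [i]]))
        group_idx.append(best)
        remaining.remove(best)
    return group_idx
```

A group whose pooled labels are perfectly balanced scores 0; skewed groups score higher, so the greedy loop tends to pair clients with complementary class distributions.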
arXiv Detail & Related papers (2022-06-15T16:23:19Z)
- Clustered Scheduling and Communication Pipelining For Efficient Resource Management Of Wireless Federated Learning [6.753282396352072]
This paper proposes using communication pipelining to enhance the wireless spectrum utilization efficiency and convergence speed of federated learning.
We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.