FilFL: Client Filtering for Optimized Client Participation in Federated
Learning
- URL: http://arxiv.org/abs/2302.06599v2
- Date: Mon, 5 Jun 2023 17:58:24 GMT
- Title: FilFL: Client Filtering for Optimized Client Participation in Federated
Learning
- Authors: Fares Fourati, Salma Kharrat, Vaneet Aggarwal, Mohamed-Slim Alouini,
Marco Canini
- Abstract summary: We propose FilFL, a new approach to optimize client participation and training by introducing client filtering.
FilFL periodically filters the available clients to identify a subset that maximizes an objective function using an efficient greedy filtering algorithm.
Our empirical results demonstrate several benefits of our approach, including improved learning efficiency, faster convergence, and up to 10 percentage points higher test accuracy compared to scenarios where client filtering is not utilized.
- Score: 95.27347185031265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an emerging machine learning paradigm that enables
clients to train collaboratively without exchanging local data. The clients
participating in the training process have a crucial impact on the convergence
rate, learning efficiency, and model generalization. In this work, we propose
FilFL, a new approach to optimizing client participation and training by
introducing client filtering. FilFL periodically filters the available clients
to identify a subset that maximizes a combinatorial objective function using an
efficient greedy filtering algorithm. From this filtered-in subset, clients are
then selected for the training process. We provide a thorough analysis of FilFL
convergence in a heterogeneous setting and evaluate its performance across
diverse vision and language tasks and realistic federated scenarios with
time-varying client availability. Our empirical results demonstrate several
benefits of our approach, including improved learning efficiency, faster
convergence, and up to 10 percentage points higher test accuracy compared to
scenarios where client filtering is not utilized.
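To make the filtering step concrete, here is a minimal Python sketch of the greedy subset selection the abstract describes. The `objective` callable, the size budget, and all names are illustrative assumptions, not the paper's exact combinatorial objective or stopping rule.

```python
from typing import Callable, Iterable, Set

def greedy_filter(
    available: Iterable[int],
    objective: Callable[[Set[int]], float],
    max_size: int,
) -> Set[int]:
    """Greedily grow a client subset that (approximately) maximizes
    `objective`, stopping when no candidate adds positive gain or the
    size budget is reached."""
    selected: Set[int] = set()
    candidates = set(available)
    current = objective(selected)
    while candidates and len(selected) < max_size:
        # Marginal gain of adding each remaining candidate client.
        gains = {c: objective(selected | {c}) - current for c in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # remaining clients are filtered out: none improves the objective
        selected.add(best)
        candidates.remove(best)
        current += gains[best]
    return selected
```

Per the abstract, FilFL reruns this filtering periodically as client availability changes, and each round's participants are then drawn from the filtered-in subset rather than from all available clients.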
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set (see the toy sketch after this list).
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its own learning rate (a minimal per-client AMSGrad sketch appears after this list).
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel uniform data sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning [41.51682329500003]
We propose a novel learning rate adaptation mechanism to adjust the server learning rate for the aggregated gradient in each round.
Through theoretical analysis, we identify a meaningful and robust indicator that is positively related to the optimal server learning rate.
arXiv Detail & Related papers (2023-01-25T03:52:45Z)
- Clustered Scheduling and Communication Pipelining For Efficient Resource Management Of Wireless Federated Learning [6.753282396352072]
This paper proposes using communication pipelining to enhance the wireless spectrum utilization efficiency and convergence speed of federated learning.
We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution.
arXiv Detail & Related papers (2022-06-15T16:23:19Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles: straggling clients and heterogeneous local data.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- QuPeL: Quantized Personalization with Applications to Federated Learning [8.420943739336067]
In this work, we introduce a quantized and personalized FL algorithm, QuPeL, that facilitates collective training with heterogeneous clients.
For personalization, we allow clients to learn compressed personalized models with different quantization parameters depending on their resources.
Numerically, we show that optimizing over the quantization levels increases the performance and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
arXiv Detail & Related papers (2021-02-23T16:43:51Z)
- Client Adaptation improves Federated Learning with Simulated Non-IID Clients [1.0896567381206714]
We present a federated learning approach for learning a client adaptable, robust model when data is non-identically and non-independently distributed (non-IID) across clients.
We show that adding learned client-specific conditioning improves model performance, and the approach works on balanced and imbalanced data sets from both audio and image domains.
arXiv Detail & Related papers (2020-07-09T13:48:39Z)
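Two of the mechanisms referenced in the list above are sketched below. First, for the "Emulating Full Client Participation" entry: a toy greedy selection of a client subset whose average gradient tracks the full-participation average. The greedy heuristic, the array layout, and all names are illustrative assumptions; the paper's actual estimator and its multi-round fairness constraint are not reproduced here.

```python
import numpy as np

def select_by_gradient_gap(grads: np.ndarray, k: int) -> list[int]:
    """Pick k clients whose mean gradient best approximates the mean
    gradient of the full client set (L2 gap), chosen greedily.

    grads: shape (num_clients, dim); one gradient estimate per client.
    """
    full_mean = grads.mean(axis=0)
    selected: list[int] = []
    remaining = list(range(len(grads)))
    for _ in range(k):
        # Add the client that most shrinks the subset-vs-full gap.
        gaps = [
            float(np.linalg.norm(grads[selected + [c]].mean(axis=0) - full_mean))
            for c in remaining
        ]
        best = remaining[int(np.argmin(gaps))]
        selected.append(best)
        remaining.remove(best)
    return selected
```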
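Second, for the FedLALR entry: since FedLALR is described as a heterogeneous local variant of AMSGrad, here is a minimal per-client AMSGrad step. The class and the fixed `lr` are assumptions for illustration; FedLALR's client-specific auto-tuned learning rate schedule is not reproduced here.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state: each client keeps its own optimizer
    instance and, in FedLALR's spirit, its own learning rate."""

    def __init__(self, dim: int, lr: float,
                 beta1: float = 0.9, beta2: float = 0.999, eps: float = 1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first-moment (momentum) estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # elementwise running max of v (the AMSGrad fix)

    def step(self, params: np.ndarray, grad: np.ndarray) -> np.ndarray:
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```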