Near-optimal Differentially Private Client Selection in Federated Settings
- URL: http://arxiv.org/abs/2310.09370v1
- Date: Fri, 13 Oct 2023 19:32:50 GMT
- Title: Near-optimal Differentially Private Client Selection in Federated Settings
- Authors: Syed Eqbal Alam, Dhirendra Shukla, and Shrisha Rao
- Abstract summary: We consider a federated network wherein clients coordinate with a central server to complete a task.
At each time step, the clients decide whether or not to participate based on their preferences.
The developed algorithm provides the clients with near-optimal values of their long-term average participation.
- Score: 3.58439716487063
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop an iterative differentially private algorithm for client selection
in federated settings. We consider a federated network wherein clients
coordinate with a central server to complete a task; however, the clients
decide whether or not to participate at each time step based on their
preferences, namely their local computation and probabilistic intent. The algorithm does not require
client-to-client information exchange. The developed algorithm provides the
clients with near-optimal values of their long-term average participation, with a
certain differential privacy guarantee. Finally, we present experimental
results demonstrating the algorithm's efficacy.
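The abstract does not reproduce the paper's iterative algorithm, but the setting it describes (clients privately deciding whether to participate, with no client-to-client exchange) can be illustrated with a standard local differential privacy mechanism. The sketch below uses randomized response to perturb each client's participation report; the preference model, the epsilon value, and the names `Client` and `dp_participation_report` are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch only: this is NOT the paper's algorithm. It shows one
# standard way a client could report a participation decision with a local
# differential privacy guarantee (randomized response).
import math
import random

def dp_participation_report(intends_to_participate: bool, epsilon: float) -> bool:
    """Randomized response: report the true decision with probability
    e^eps / (1 + e^eps), otherwise flip it. This is eps-locally-DP."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return intends_to_participate if random.random() < p_truth else not intends_to_participate

class Client:
    def __init__(self, preference: float):
        self.preference = preference  # probabilistic intent in [0, 1] (assumed model)

    def decide(self, epsilon: float) -> bool:
        # Local decision based only on the client's own preference;
        # no client-to-client information exchange.
        intent = random.random() < self.preference
        return dp_participation_report(intent, epsilon)

# One round of server-side selection from noisy reports.
clients = [Client(preference=p) for p in (0.2, 0.5, 0.8, 0.9)]
selected = [i for i, c in enumerate(clients) if c.decide(epsilon=1.0)]
print("participating clients this round:", selected)
```

With epsilon = 1.0, each report is truthful with probability e/(1+e), roughly 0.73, so the server sees a noisy view of participation intent whose bias is known and correctable in aggregate.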
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
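As a rough illustration of the single-round criterion described above (an assumption-laden sketch, not the paper's method), one could greedily grow a client subset whose mean gradient tracks the full-participation mean. The function name `greedy_gradient_matching` and the plain-averaging error measure are hypothetical choices.

```python
# Hedged sketch: greedily pick a client subset whose mean gradient best
# approximates the full-participation mean gradient. The paper's exact
# estimator and its multi-round fairness constraint are not reproduced.
import numpy as np

def greedy_gradient_matching(grads: np.ndarray, k: int) -> list[int]:
    """grads: (n_clients, dim) array of client gradients; returns k indices."""
    full_mean = grads.mean(axis=0)
    chosen: list[int] = []
    for _ in range(k):
        best_i, best_err = None, float("inf")
        for i in range(len(grads)):
            if i in chosen:
                continue
            # Estimation error of the candidate subset's mean gradient.
            subset_mean = grads[chosen + [i]].mean(axis=0)
            err = np.linalg.norm(subset_mean - full_mean)
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
    return chosen

rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 5))
print(greedy_gradient_matching(grads, k=3))
```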
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client-side adaptive algorithm designed to reconcile acceleration and stability in client-side adaptive optimization.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
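A minimal sketch of the greedy selection pattern GreedyFed describes, under assumptions: contributions are tracked as a smoothed running score per client (a stand-in for the paper's Shapley-value estimates), and the top-k scorers are selected each round. The class and update rule here are illustrative, not the paper's procedure.

```python
# Hedged sketch of a greedy, contribution-based client selector.
import heapq

class GreedySelector:
    def __init__(self, n_clients: int, alpha: float = 0.5):
        self.scores = [0.0] * n_clients
        self.alpha = alpha  # smoothing factor for the running estimate (assumed)

    def select(self, k: int) -> list[int]:
        # Greedily take the k clients with the highest estimated contribution.
        return heapq.nlargest(k, range(len(self.scores)), key=self.scores.__getitem__)

    def update(self, client_id: int, observed_contribution: float) -> None:
        # Exponential moving average of each selected client's contribution,
        # standing in for a proper Shapley-value estimate.
        s = self.scores[client_id]
        self.scores[client_id] = (1 - self.alpha) * s + self.alpha * observed_contribution

selector = GreedySelector(n_clients=20)
for cid in selector.select(k=5):
    selector.update(cid, observed_contribution=0.1)  # e.g. validation-loss drop
```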
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
- Multi-Criteria Client Selection and Scheduling with Fairness Guarantee for Federated Learning Service [17.986744632466515]
Federated Learning (FL) enables multiple clients to train machine learning models collaboratively without sharing the raw training data.
We propose a multi-criteria client selection and scheduling scheme with a fairness guarantee.
Our scheme can improve model quality, especially when data are non-IID.
arXiv Detail & Related papers (2023-12-05T16:56:24Z)
- Provably Personalized and Robust Federated Learning [47.50663360022456]
We propose simple algorithms which identify clusters of similar clients and train a personalized model per cluster.
The convergence rates of our algorithms asymptotically match those obtained if we knew the true underlying clustering of the clients, and are provably robust in the Byzantine setting.
arXiv Detail & Related papers (2023-06-14T09:37:39Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Decentralized adaptive clustering of deep nets is beneficial for client collaboration [0.7012240324005975]
We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting.
Our contribution is an algorithm that, for each client, finds beneficial collaborations based on a similarity estimate for the local task.
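A hedged sketch of similarity-driven peer selection in that spirit: each client scores potential collaborators by cosine similarity of model updates and keeps those above a threshold. The similarity measure, the threshold, and the helper names are assumptions; the paper's actual similarity estimate for the local task may differ.

```python
# Hedged sketch of peer selection by update similarity in a P2P setting.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def pick_collaborators(my_update: np.ndarray,
                       peer_updates: dict[int, np.ndarray],
                       threshold: float = 0.5) -> list[int]:
    # Keep peers whose updates point in a similar direction to ours,
    # as a crude proxy for having a similar local task.
    return [pid for pid, upd in peer_updates.items()
            if cosine(my_update, upd) >= threshold]

rng = np.random.default_rng(1)
mine = rng.normal(size=8)
peers = {i: rng.normal(size=8) for i in range(5)}
print(pick_collaborators(mine, peers))
```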
arXiv Detail & Related papers (2022-06-17T15:38:31Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: statistical heterogeneity of client data and straggling clients.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Client Selection Approach in Support of Clustered Federated Learning over Wireless Edge Networks [2.6774008509840996]
Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models.
This paper proposes a new client selection algorithm that aims to accelerate the convergence rate for obtaining specialized machine learning models.
arXiv Detail & Related papers (2021-08-16T21:38:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.