On the Convergence of Clustered Federated Learning
- URL: http://arxiv.org/abs/2202.06187v1
- Date: Sun, 13 Feb 2022 02:39:19 GMT
- Title: On the Convergence of Clustered Federated Learning
- Authors: Jie MA, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang
- Abstract summary: In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that models both the client's cluster and each individual client in a unified optimization framework.
- Score: 57.934295064030636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a federated learning system, the clients, e.g. mobile devices and
organization participants, usually have different personal preferences or
behavior patterns, which gives rise to the Non-IID data problem across clients.
Clustered federated learning groups users into clusters such that clients in
the same cluster share the same or similar behavior patterns, thereby
approximately satisfying the IID data assumption made by most traditional
machine learning algorithms. Most existing clustering methods in FL treat every
client equally and ignore the clients' different importance contributions. This
paper proposes a novel weighted client-based clustered FL algorithm that models
both the client's cluster and each individual client in a unified optimization
framework. Moreover, the paper provides a convergence analysis for the proposed
clustered FL method. Experimental analysis demonstrates the effectiveness of
the proposed method.
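Since the abstract describes the method only at a high level, the snippet below is a minimal sketch of what one round of weighted, cluster-wise server aggregation could look like. The cosine-similarity cluster assignment, the per-client importance weights, and the function names (assign_clusters, weighted_cluster_aggregate) are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def assign_clusters(client_updates, cluster_models):
    """Assign each client to the cluster whose current model its local update
    is most similar to (cosine similarity is an illustrative choice only)."""
    assignments = []
    for update in client_updates:
        sims = [
            float(np.dot(update, c))
            / (np.linalg.norm(update) * np.linalg.norm(c) + 1e-12)
            for c in cluster_models
        ]
        assignments.append(int(np.argmax(sims)))
    return assignments

def weighted_cluster_aggregate(client_updates, client_weights, assignments, num_clusters):
    """Aggregate client updates per cluster, weighting each client by its
    (assumed) importance weight instead of treating all clients equally."""
    dim = client_updates[0].shape[0]
    new_models = []
    for k in range(num_clusters):
        members = [i for i, a in enumerate(assignments) if a == k]
        if not members:
            new_models.append(np.zeros(dim))  # empty cluster: keep a placeholder model
            continue
        w = np.array([client_weights[i] for i in members], dtype=float)
        w = w / w.sum()  # normalise importance weights within the cluster
        new_models.append(sum(wi * client_updates[i] for wi, i in zip(w, members)))
    return new_models

# Toy usage: 6 clients, 2 clusters, 4-dimensional model parameters.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(6)]      # local model updates
weights = [1.0, 2.0, 1.0, 0.5, 3.0, 1.0]              # per-client importance (assumed)
clusters = [rng.normal(size=4) for _ in range(2)]     # current cluster models
assignments = assign_clusters(updates, clusters)
clusters = weighted_cluster_aggregate(updates, weights, assignments, num_clusters=2)
```

In the paper's setting, the importance weights would presumably arise from the unified optimization objective rather than being fixed by hand as in this toy example.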
Related papers
- Co-clustering for Federated Recommender System [33.70723179405055]
Federated Recommender System (FRS) offers a solution that strikes a balance between providing high-quality recommendations and preserving user privacy.
The presence of statistical heterogeneity in FRS, commonly observed due to personalized decision-making patterns, can pose challenges.
We propose CoFedRec, a novel Co-clustering Federated Recommendation mechanism.
arXiv Detail & Related papers (2024-11-03T21:32:07Z)
- Equitable Federated Learning with Activation Clustering [5.116582735311639]
Federated learning is a prominent distributed learning paradigm that incorporates collaboration among diverse clients.
We propose an equitable clustering-based framework where the clients are categorized/clustered based on how similar they are to each other.
arXiv Detail & Related papers (2024-10-24T23:36:39Z)
- A Bayesian Framework for Clustered Federated Learning [14.426129993432193]
One of the main challenges of federated learning (FL) is handling non-independent and identically distributed (non-IID) client data.
We present a unified Bayesian framework for clustered FL which associates clients to clusters.
This work provides insights on client-cluster associations and enables client knowledge sharing in new ways.
arXiv Detail & Related papers (2024-10-20T19:11:24Z)
- Federated cINN Clustering for Accurate Clustered Federated Learning [33.72494731516968]
Federated Learning (FL) presents an innovative approach to privacy-preserving distributed machine learning.
We propose the Federated cINN Clustering Algorithm (FCCA) to robustly cluster clients into different groups.
arXiv Detail & Related papers (2023-09-04T10:47:52Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS)
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms infer these similarities only indirectly during training. (A short sketch of the principal-angle computation named in this title appears after this list.)
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution [7.924081556869144]
We present a new algorithm, FLIS, which groups the client population into clusters with jointly trainable data distributions.
We present experimental results to demonstrate the benefits of FLIS over the state-of-the-art benchmarks on CIFAR-100/10, SVHN, and FMNIST datasets.
arXiv Detail & Related papers (2022-08-20T22:10:48Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
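The principal-angle idea referenced in the "Efficient Distribution Similarity Identification" entry above can be made concrete with standard linear algebra. The sketch below follows the general definition of principal angles between subspaces; the choice of subspace dimension k and the use of raw per-client data matrices are assumptions, not that paper's specific pipeline.

```python
import numpy as np

def principal_angles(X1, X2, k=3):
    """Principal angles between the top-k left singular subspaces of two
    clients' data matrices (features x samples). Smaller angles suggest
    more similar client data distributions."""
    U1, _, _ = np.linalg.svd(X1, full_matrices=False)
    U2, _, _ = np.linalg.svd(X2, full_matrices=False)
    # Cosines of the principal angles are the singular values of U1^T U2.
    cosines = np.linalg.svd(U1[:, :k].T @ U2[:, :k], compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Toy usage: two clients, each with 20 samples of 10-dimensional data.
rng = np.random.default_rng(1)
angles = principal_angles(rng.normal(size=(10, 20)), rng.normal(size=(10, 20)))
```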
This list is automatically generated from the titles and abstracts of the papers on this site.