Factorized-FL: Agnostic Personalized Federated Learning with Kernel
Factorization & Similarity Matching
- URL: http://arxiv.org/abs/2202.00270v1
- Date: Tue, 1 Feb 2022 08:00:59 GMT
- Title: Factorized-FL: Agnostic Personalized Federated Learning with Kernel
Factorization & Similarity Matching
- Authors: Wonyong Jeong, Sung Ju Hwang
- Abstract summary: In real-world federated learning scenarios, participants could have their own personalized labels which are incompatible with those from other clients.
We introduce Factorized-FL, which effectively tackles label- and task-heterogeneous federated learning settings.
We extensively validate our method on both label- and domain-heterogeneous settings, on which it outperforms the state-of-the-art personalized federated learning methods.
- Score: 70.95184015412798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world federated learning scenarios, participants could have their own
personalized labels which are incompatible with those from other clients, due
to using different label permutations or tackling completely different tasks or
domains. However, most existing FL approaches cannot effectively tackle such
extremely heterogeneous scenarios since they often assume that (1) all
participants use a synchronized set of labels, and (2) they train on the same
task from the same domain. In this work, to address these challenges, we
introduce Factorized-FL, which effectively handles label- and
task-heterogeneous federated learning settings by factorizing the model
parameters into a pair of vectors, where one captures the common knowledge
across different labels and tasks and the other captures knowledge specific to
the task each local model tackles. Moreover, based on the distance in the
client-specific vector space, Factorized-FL applies a selective aggregation
scheme, utilizing only the knowledge from the relevant participants for each
client. We extensively validate our method on both label- and
domain-heterogeneous settings, on which it outperforms the state-of-the-art
personalized federated learning methods.
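To make the factorization and similarity matching described above concrete, below is a minimal NumPy sketch (not the authors' implementation). It assumes a rank-1 factorization W ≈ outer(u, v), treats u as the shared factor that is aggregated across clients and v as the client-specific factor used for matching, and uses a cosine-similarity threshold tau; all names and the choice of threshold are illustrative.

import numpy as np

def factorize(W, iters=50, seed=0):
    # Alternating least squares for the best rank-1 approximation
    # W ~= np.outer(u, v); equivalent to power iteration on W @ W.T.
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(iters):
        v = W.T @ u / (u @ u)
        u = W @ v / (v @ v)
    return u, v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def selective_aggregate(u_list, v_list, client, tau=0.5):
    # Average shared factors only over clients whose client-specific
    # factors lie close to this client's in the vector space.
    ref, agg, total = v_list[client], 0.0, 0.0
    for u, v in zip(u_list, v_list):
        s = cosine(ref, v)
        sign = 1.0 if s >= 0 else -1.0  # resolve the (u, v) vs (-u, -v) ambiguity
        if abs(s) >= tau:
            agg = agg + abs(s) * sign * u
            total += abs(s)
    return agg / total

# Toy usage: clients 0 and 1 share a task direction, client 2 does not.
rng = np.random.default_rng(1)
task = rng.normal(size=4)
vs = [task + 0.05 * rng.normal(size=4),
      task + 0.05 * rng.normal(size=4),
      rng.normal(size=4)]
Ws = [np.outer(rng.normal(size=8), v) for v in vs]
us_hat, vs_hat = zip(*(factorize(W) for W in Ws))
u0_new = selective_aggregate(list(us_hat), list(vs_hat), client=0)

Which factor plays the client-specific role, and how the method parameterizes deeper layers, follow the paper itself; the sketch only shows the match-then-aggregate mechanic.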
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Rethinking Semi-Supervised Federated Learning: How to co-train
fully-labeled and fully-unlabeled client imaging data [6.322831694506287]
Isolated Federated Learning (IsoFed) is a learning scheme specifically designed for semi-supervised federated learning (SSFL).
It circumvents the problem by avoiding simple averaging of supervised and semi-supervised models together.
In particular, our training approach consists of two parts - (a) isolated aggregation of labeled and unlabeled client models, and (b) local self-supervised pretraining of isolated global models in all clients.
arXiv Detail & Related papers (2023-10-28T20:41:41Z) - Federated Two Stage Decoupling With Adaptive Personalization Layers [5.69361786082969]
Federated learning has gained significant attention due to its ability to enable distributed learning while maintaining privacy constraints.
However, it inherently suffers from significant learning degradation and slow convergence.
It is therefore natural to cluster homogeneous clients into the same group, so that only the model weights within each group are aggregated (see the sketch after this list).
arXiv Detail & Related papers (2023-08-30T07:46:32Z) - Masked Autoencoders are Efficient Continual Federated Learners [20.856520787551453]
Continual learning should be grounded in unsupervised learning of representations that are shared across clients.
Masked autoencoders for distribution estimation are particularly amenable to this setup.
arXiv Detail & Related papers (2023-06-06T09:38:57Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Federated Self-supervised Learning for Heterogeneous Clients [20.33482170846688]
We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all the clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2022-05-25T05:07:44Z) - On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages each client's group and the individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z) - Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to make federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees.
arXiv Detail & Related papers (2020-10-02T14:09:10Z) - Federated Semi-Supervised Learning with Inter-Client Consistency &
Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)