FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts
- URL: http://arxiv.org/abs/2306.08586v2
- Date: Wed, 4 Oct 2023 22:36:28 GMT
- Title: FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts
- Authors: Chen Dun, Mirian Hipolito Garcia, Guoqing Zheng, Ahmed Hassan Awadallah, Robert Sim, Anastasios Kyrillidis, Dimitrios Dimitriadis
- Abstract summary: FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the goals in Federated Learning (FL) is to create personalized models
that can adapt to the context of each participating client, while utilizing
knowledge from a shared global model. Yet personalization often requires a
fine-tuning step on clients' labeled data in order to achieve good
performance. This may not be feasible in scenarios where incoming clients are
fresh and/or have privacy concerns. It then remains open how one can achieve
just-in-time personalization in these scenarios. We propose FedJETs, a novel
solution that uses a Mixture-of-Experts (MoE) framework within an FL setup. Our
method leverages the diversity of the clients to train specialized experts on
different subsets of classes, and a gating function to route the input to the
most relevant expert(s). Our gating function harnesses the knowledge of a
pretrained common expert to enhance its routing decisions on the fly. As
a highlight, our approach can improve accuracy by up to 18% in state-of-the-art
FL settings, while maintaining competitive zero-shot performance. In practice,
our method can handle non-homogeneous data distributions, scale more
efficiently, and improve the state-of-the-art performance on common FL
benchmarks.
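
The sketch below illustrates the architecture the abstract describes: a gating network, conditioned on features from a frozen pretrained "common expert", routes each input to the top-k of several specialized experts. This is an illustrative reconstruction, not the authors' released code; the module shapes, the top-k choice, and the CIFAR-like input size are all assumptions made for the example.

```python
# Minimal sketch of a FedJETs-style MoE with a pretrained common expert
# informing the gate. Illustrative only; sizes and top-k are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FedMoE(nn.Module):
    def __init__(self, common_expert, experts, feat_dim, top_k=1):
        super().__init__()
        self.common_expert = common_expert            # frozen pretrained backbone
        for p in self.common_expert.parameters():
            p.requires_grad = False                   # only experts + gate train
        self.experts = nn.ModuleList(experts)         # class-specialized experts
        self.gate = nn.Linear(feat_dim, len(experts)) # routing head
        self.top_k = top_k

    def forward(self, x):
        # Route on the common expert's features rather than raw inputs,
        # so the gate benefits from pretrained knowledge on the fly.
        with torch.no_grad():
            feats = self.common_expert(x)                     # (B, feat_dim)
        weights = F.softmax(self.gate(feats), dim=-1)         # (B, num_experts)
        top_w, top_i = weights.topk(self.top_k, dim=-1)       # keep top-k experts
        mask = torch.zeros_like(weights).scatter(-1, top_i, top_w)
        mask = mask / mask.sum(dim=-1, keepdim=True)          # renormalize
        outs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, num_classes)
        return (mask.unsqueeze(-1) * outs).sum(dim=1)         # mixed logits


# Toy usage: 4 experts over CIFAR-sized inputs, top-1 routing.
experts = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(4)]
common = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
model = FedMoE(common, experts, feat_dim=64, top_k=1)
logits = model(torch.randn(8, 3, 32, 32))  # -> shape (8, 10)
```

In the federated setting, the experts and the gate would be trained across clients and aggregated by the server (e.g., FedAvg over their parameters) while the common expert stays fixed; that training loop is omitted from this sketch.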
Related papers
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of non-IID setting where clients own incomplete classes.
Our proposed algorithm, named MAP, can simultaneously achieve the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning [40.16581292336117]
In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge.
It is critical to understand how to navigate this personalization vs robustness trade-off when designing federated systems.
arXiv Detail & Related papers (2023-10-06T23:46:33Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate simultaneously achieves superior global accuracy, individual accuracy, and efficiency compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM has the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- PGFed: Personalize Each Client's Global Objective for Federated Learning [7.810284483002312]
We propose a novel personalized FL framework that enables each client to personalize its own global objective.
To avoid massive (O(N^2)) communication overhead and potential privacy leakage, each client's risk is estimated through a first-order approximation of the other clients' adaptive risk aggregation.
Our experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-12-02T21:16:39Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We study personalized federated learning (FL), which learns personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization [6.66389628571674]
Federated learning (FL) has become a prevalent distributed machine learning paradigm with improved privacy.
This paper introduces a new concept called federated adaptation, which targets adapting the trained model in a federated manner to achieve better personalization results.
We propose PFA, a framework to accomplish Privacy-preserving Federated Adaptation.
arXiv Detail & Related papers (2021-03-02T08:07:34Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)