On Bridging Generic and Personalized Federated Learning
- URL: http://arxiv.org/abs/2107.00778v1
- Date: Fri, 2 Jul 2021 00:25:48 GMT
- Title: On Bridging Generic and Personalized Federated Learning
- Authors: Hong-You Chen, Wei-Lun Chao
- Abstract summary: We propose a novel federated learning framework that explicitly decouples a model's dual duties into two prediction tasks.
With this two-loss, two-predictor framework, which we name Federated Robust Decoupling (Fed-RoD), the learned model can simultaneously achieve state-of-the-art generic and personalized performance.
- Score: 18.989191579101586
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning is promising for its ability to collaboratively train
models with multiple clients without accessing their data, but is vulnerable when
clients' data distributions diverge from each other. This divergence further
leads to a dilemma: "Should we prioritize the learned model's generic
performance (for future use at the server) or its personalized performance (for
each client)?" These two, seemingly competing goals have divided the community
to focus on one or the other, yet in this paper we show that it is possible to
approach both at the same time. Concretely, we propose a novel federated
learning framework that explicitly decouples a model's dual duties with two
prediction tasks. On the one hand, we introduce a family of losses that are
robust to non-identical class distributions, enabling clients to train a
generic predictor with a consistent objective across them. On the other hand,
we formulate the personalized predictor as a lightweight adaptive module that
is learned to minimize each client's empirical risk on top of the generic
predictor. With this two-loss, two-predictor framework, which we name Federated
Robust Decoupling (Fed-RoD), the learned model can simultaneously achieve
state-of-the-art generic and personalized performance, essentially bridging the
two tasks.
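To make the decoupled design concrete, below is a minimal PyTorch sketch of a two-loss, two-predictor model in the spirit of the abstract: a shared body feeds a generic head trained with a class-prior-adjusted (balanced-softmax-style) loss, plus a lightweight personalized head trained on the client's plain empirical risk. The names (`FedRoDNet`, `balanced_softmax_loss`), the residual connection between heads, and the specific loss form are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FedRoDNet(nn.Module):
    """Shared body with two heads: a generic head (aggregated across
    clients) and a lightweight personalized head (kept local)."""
    def __init__(self, in_dim, feat_dim, num_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.generic_head = nn.Linear(feat_dim, num_classes)
        self.personal_head = nn.Linear(feat_dim, num_classes)  # adaptive module

    def forward(self, x):
        z = self.body(x)
        g = self.generic_head(z)                           # generic logits
        p = g.detach() + self.personal_head(z.detach())    # personalized logits as a residual
        return g, p

def balanced_softmax_loss(logits, targets, class_counts):
    """Cross-entropy with logits shifted by the log class prior, so clients
    with different label distributions optimize a consistent objective."""
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + prior.clamp_min(1e-12).log(), targets)

# One local training step on a client (illustrative):
model = FedRoDNet(in_dim=32, feat_dim=64, num_classes=10)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
counts = torch.bincount(y, minlength=10)  # this client's label counts
g_logits, p_logits = model(x)
loss = balanced_softmax_loss(g_logits, y, counts) + F.cross_entropy(p_logits, y)
loss.backward()
```

In federated training, one natural choice is to aggregate only the body and generic head at the server while each personalized head stays on its client; the `detach` calls above keep the personal loss from steering the shared parameters, which is one possible design choice rather than the paper's prescription.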
Related papers
- Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness.
We propose two guiding principles that resolve the inherent conflict between the two metrics and allow them to reinforce each other.
Our approach adaptively enhances this diversity by selecting clients based on their data distributions, thereby improving both model performance and fairness.
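As a purely hypothetical illustration of distribution-aware selection (the paper's actual strategy is not spelled out in this summary), the sketch below greedily picks clients whose pooled label distribution best approximates the distribution of all clients, loosely "emulating full participation":

```python
import numpy as np

def select_clients(client_label_dists, k):
    """Greedily choose k clients so the pooled label distribution of the
    selection stays close (in L1 distance) to the all-client average."""
    dists = np.asarray(client_label_dists, dtype=float)  # (n_clients, n_classes)
    target = dists.mean(axis=0)  # label distribution under full participation
    chosen = []
    pooled = np.zeros_like(target)
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(dists)):
            if i in chosen:
                continue
            cand = (pooled * len(chosen) + dists[i]) / (len(chosen) + 1)
            err = np.abs(cand - target).sum()
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
        pooled = (pooled * (len(chosen) - 1) + dists[best_i]) / len(chosen)
    return chosen

# Five clients with skewed label distributions over three classes; pick two:
print(select_clients([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8],
                      [.3, .4, .3], [.6, .2, .2]], k=2))
```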
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Robust Training of Federated Models with Extremely Label Deficiency [84.00832527512148]
Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.
We propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data.
Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings.
arXiv Detail & Related papers (2024-02-22T10:19:34Z) - Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z) - PeFLL: Personalized Federated Learning by Learning to Learn [16.161876130822396]
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects.
At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork.
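A rough sketch of the embedding-network-plus-hypernetwork pattern this summary describes; the architectures, names, and the choice of a linear personalized classifier are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class EmbedNet(nn.Module):
    """Maps a batch of one client's examples to a fixed-size descriptor."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())

    def forward(self, x):
        return self.enc(x).mean(dim=0)  # average-pool over the client's samples

class HyperNet(nn.Module):
    """Emits the weights of a personalized linear classifier from a descriptor."""
    def __init__(self, embed_dim, in_dim, num_classes):
        super().__init__()
        self.in_dim, self.num_classes = in_dim, num_classes
        self.gen = nn.Linear(embed_dim, in_dim * num_classes + num_classes)

    def forward(self, v):
        flat = self.gen(v)
        split = self.in_dim * self.num_classes
        W = flat[:split].view(self.num_classes, self.in_dim)
        b = flat[split:]
        return W, b

# Generate and apply a personalized classifier for one client (illustrative):
embed, hyper = EmbedNet(in_dim=32, embed_dim=16), HyperNet(16, 32, num_classes=10)
client_x = torch.randn(20, 32)
W, b = hyper(embed(client_x))
logits = client_x @ W.t() + b  # both networks remain trainable end to end
```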
arXiv Detail & Related papers (2023-06-08T19:12:42Z) - Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
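The summary leaves the mechanism implicit, but prototype-based federated learning generally builds on per-class mean features; a minimal sketch of that building block (function names hypothetical, not this paper's exact procedure):

```python
import torch

def class_prototypes(features, labels, num_classes):
    """Client side: per-class mean feature vectors and class counts."""
    protos = torch.zeros(num_classes, features.size(1))
    counts = torch.zeros(num_classes)
    for f, y in zip(features, labels):
        protos[y] += f
        counts[y] += 1
    present = counts > 0
    protos[present] /= counts[present].unsqueeze(1)
    return protos, counts

def aggregate_prototypes(client_protos, client_counts):
    """Server side: count-weighted average of the clients' prototypes."""
    total = sum(client_counts)  # per-class counts summed over clients
    summed = sum(p * c.unsqueeze(1) for p, c in zip(client_protos, client_counts))
    out = torch.zeros_like(summed)
    present = total > 0
    out[present] = summed[present] / total[present].unsqueeze(1)
    return out

def nearest_prototype(feature, global_protos):
    """Inference: predict the class of the closest global prototype."""
    return torch.cdist(feature.unsqueeze(0), global_protos).argmin().item()
```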
arXiv Detail & Related papers (2023-03-22T04:06:29Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles: statistical heterogeneity across clients and stragglers (slow clients).
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
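A minimal sketch of that split, assuming a PyTorch setup in which only the shared representation is averaged at the server while each client keeps its own head (all names illustrative):

```python
import torch
import torch.nn as nn

class RepModel(nn.Module):
    """Shared representation (federated) plus a per-client head (local)."""
    def __init__(self, in_dim, feat_dim, num_classes):
        super().__init__()
        self.rep = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)  # never leaves the client

    def forward(self, x):
        return self.head(self.rep(x))

def average_representations(models, weights):
    """FedAvg applied to the representation parameters only."""
    keys = models[0].rep.state_dict().keys()
    avg = {k: sum(w * m.rep.state_dict()[k] for m, w in zip(models, weights))
           for k in keys}
    for m in models:
        m.rep.load_state_dict(avg)

# Three clients share one representation but keep distinct heads:
clients = [RepModel(32, 64, 10) for _ in range(3)]
average_representations(clients, weights=[1 / 3] * 3)
```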
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Federated Self-supervised Learning for Heterogeneous Clients [20.33482170846688]
We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2022-05-25T05:07:44Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that addresses both challenges, inter-client distribution shift and noisy local data, simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
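Local mixup itself is a standard augmentation; here is a minimal sketch of one client-side mixup step (how DRFLM weaves it into its distributionally robust objective is not shown, and `alpha` is an assumed hyperparameter):

```python
import torch
import torch.nn.functional as F

def local_mixup(x, y, num_classes, alpha=0.2):
    """Mix each example with a random partner from the same client's batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    y_onehot = F.one_hot(y, num_classes).float()
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix

# Usage on one client's batch:
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
x_mix, y_mix = local_mixup(x, y, num_classes=10)
```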
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than alternatives in the typical federated learning setting.
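FML's mutual learning plausibly resembles deep mutual learning between the generalized and the personalized model; the sketch below shows one such joint step, where each model fits the labels while distilling from the other (loss weights and the detach pattern are assumptions):

```python
import torch
import torch.nn.functional as F

def mutual_learning_loss(global_logits, personal_logits, y, alpha=0.5, beta=0.5):
    """Each model minimizes cross-entropy plus a KL term toward the other
    model's (detached) soft predictions."""
    kl_g = F.kl_div(F.log_softmax(global_logits, dim=-1),
                    F.softmax(personal_logits.detach(), dim=-1),
                    reduction="batchmean")
    kl_p = F.kl_div(F.log_softmax(personal_logits, dim=-1),
                    F.softmax(global_logits.detach(), dim=-1),
                    reduction="batchmean")
    loss_g = (1 - alpha) * F.cross_entropy(global_logits, y) + alpha * kl_g
    loss_p = (1 - beta) * F.cross_entropy(personal_logits, y) + beta * kl_p
    return loss_g + loss_p

# Illustrative use with random logits from the two models:
y = torch.randint(0, 10, (8,))
loss = mutual_learning_loss(torch.randn(8, 10, requires_grad=True),
                            torch.randn(8, 10, requires_grad=True), y)
loss.backward()
```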
arXiv Detail & Related papers (2020-06-27T09:35:03Z) - Learning Diverse Representations for Fast Adaptation to Distribution
Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.