Federated Mutual Learning
- URL: http://arxiv.org/abs/2006.16765v3
- Date: Thu, 17 Sep 2020 06:10:24 GMT
- Title: Federated Mutual Learning
- Authors: Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou,
Kun Kuang, Fei Wu, Chao Wu
- Abstract summary: Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML can achieve better performance than alternatives in the typical federated learning setting.
- Score: 65.46254760557073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables collaborative training of deep learning models
on decentralized data. However, three types of heterogeneity in the FL setting
pose distinctive challenges to the canonical federated learning algorithm
(FedAvg). First, due to the non-IIDness of the data, the shared global model
may perform worse than local models trained solely on private data. Second,
the objectives of the central server and the clients may differ: the central
server seeks a generalized model, whereas each client pursues a personalized
model, and clients may run different tasks. Third, clients may need to design
customized models for their own scenarios and tasks. In this work, we present
a novel federated learning paradigm, named Federated Mutual Learning (FML),
that deals with these three heterogeneities. FML allows clients to train a
generalized model collaboratively and a personalized model independently, and
to design their own private customized models. Thus, the non-IIDness of the
data is no longer a bug but a feature through which clients can be served
better personally. Experiments show that FML can achieve better performance
than alternatives in the typical FL setting, and that clients with different
models and tasks can benefit from FML.
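The abstract leaves the training mechanics implicit; the sketch below illustrates one plausible FML-style local update in PyTorch, assuming a deep-mutual-learning objective (cross-entropy plus a bidirectional KL term) between the client's private personalized model and the shared generalized model received from the server. The function name `fml_local_update` and the mixing weights `alpha` and `beta` are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of an FML-style local round (illustrative, not the authors' code).
# The client trains its private personalized model and the shared generalized
# model together: each gets a cross-entropy loss on local data plus a KL term
# pulling its predictions toward the other model's (deep mutual learning).
import torch
import torch.nn.functional as F

def fml_local_update(personal, shared, loader, alpha=0.5, beta=0.5, lr=0.01):
    personal.train()
    shared.train()
    opt_p = torch.optim.SGD(personal.parameters(), lr=lr)
    opt_s = torch.optim.SGD(shared.parameters(), lr=lr)
    for x, y in loader:
        out_p, out_s = personal(x), shared(x)
        # Each KL term treats the other model's prediction as a fixed target.
        kl_p = F.kl_div(F.log_softmax(out_p, dim=1),
                        F.softmax(out_s.detach(), dim=1), reduction="batchmean")
        kl_s = F.kl_div(F.log_softmax(out_s, dim=1),
                        F.softmax(out_p.detach(), dim=1), reduction="batchmean")
        loss = ((1 - alpha) * F.cross_entropy(out_p, y) + alpha * kl_p
                + (1 - beta) * F.cross_entropy(out_s, y) + beta * kl_s)
        opt_p.zero_grad()
        opt_s.zero_grad()
        loss.backward()
        opt_p.step()
        opt_s.step()
    # Only the shared model is uploaded for FedAvg-style aggregation; the
    # personalized model never leaves the client.
    return shared.state_dict()
```

Because only the predictive distributions couple the two models, the private model's architecture is free to differ from client to client, which is how the three heterogeneities in the abstract can coexist in one training loop.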
Related papers
- Personalized Hierarchical Split Federated Learning in Wireless Networks [24.664469755746463]
We propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specially designed to achieve better personalization performance.
We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregations on the global model.
Once the global model is trained, we fine-tune each client to obtain the personalized models.
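The "train globally, then fine-tune per client" recipe in this entry is straightforward to sketch; the helper below is a generic illustration of that final step only (names are assumed), omitting PHSFL's model splitting and hierarchical aggregation.

```python
# Generic "global model, then per-client fine-tuning" step, as in the PHSFL
# summary above (illustrative; the split/hierarchical machinery is omitted).
import copy
import torch
import torch.nn.functional as F

def personalize_by_finetuning(global_model, client_loaders, epochs=1, lr=0.01):
    """Return one fine-tuned copy of the trained global model per client."""
    personalized = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)  # start each client from global weights
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
        personalized.append(model)
    return personalized
```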
arXiv Detail & Related papers (2024-11-09T02:41:53Z)
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
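The additive prediction rule described above is simple to make concrete; the sketch below assumes `assigned_models` holds the (at most one per level) models assigned to a client, and leaves out how the assignment itself is learned.

```python
# FeMAM-style additive prediction (illustrative): a client's output is the sum
# of the outputs of the one model assigned to it at each level.
import torch

def additive_predict(assigned_models, x):
    """Sum the logits of the models assigned to this client across all levels."""
    logits = None
    for model in assigned_models:  # at most one model per level
        out = model(x)
        logits = out if logits is None else logits + out
    return logits
```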
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In many real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of non-IID setting where clients own incomplete classes.
Our proposed algorithm, MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Client-supervised Federated Learning: Towards One-model-for-all Personalization [28.574858341430858]
We propose a novel federated learning framework that learns a single robust global model achieving performance competitive with personalized models on unseen/test clients in the FL system.
Specifically, we design a new method, Client-Supervised Federated Learning (FedCS), to unravel clients' bias in instances' latent representations so that the global model can learn both client-specific and client-agnostic knowledge.
arXiv Detail & Related papers (2024-03-28T15:29:19Z)
- FAM: fast adaptive federated meta-learning [10.980548731600116]
We propose a fast adaptive federated meta-learning (FAM) framework for collaboratively learning a single global model.
A skeleton network is grown on each client to train a personalized model by learning additional client-specific parameters from local data.
The personalized client models outperformed the locally trained models, demonstrating the efficacy of the FAM mechanism.
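A minimal sketch of the skeleton-plus-client-specific-parameters idea this entry describes; treating the client-specific parameters as a linear head, and freezing the skeleton during personalization, are illustrative assumptions rather than FAM's exact construction.

```python
# FAM-style personalization (illustrative): a shared "skeleton" backbone plus
# client-specific parameters, modeled here as a per-client linear head trained
# only on local data. FAM's actual skeleton-growing procedure is not shown.
import torch.nn as nn

class PersonalizedNet(nn.Module):
    def __init__(self, skeleton: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.skeleton = skeleton                      # shared across clients
        self.head = nn.Linear(feat_dim, num_classes)  # client-specific parameters
        for p in self.skeleton.parameters():
            p.requires_grad = False  # freeze shared part during personalization

    def forward(self, x):
        return self.head(self.skeleton(x))
```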
arXiv Detail & Related papers (2023-08-26T22:54:45Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation scheme.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We propose personalized federated learning (FL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
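The "adopt the global model when it performs better" rule above can be made concrete with a held-out comparison; using local validation loss as the criterion is an assumption, and the paper's incentive mechanism is not modeled.

```python
# DyPFL-style model choice (illustrative): each client keeps its personalized
# model only if it beats the global model on held-out local data.
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_val_loss(model, val_loader):
    model.eval()
    total, n = 0.0, 0
    for x, y in val_loader:
        total += F.cross_entropy(model(x), y, reduction="sum").item()
        n += y.numel()
    return total / max(n, 1)

def choose_model(personalized_model, global_model, val_loader):
    """Adopt the global model whenever it outperforms the personalized one."""
    if local_val_loss(global_model, val_loader) < local_val_loss(personalized_model, val_loader):
        return global_model
    return personalized_model
```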
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method for handling heterogeneous device capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client.
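The per-client subnetwork can be sketched as magnitude-based unstructured pruning; the keep-ratio threshold below is an illustrative criterion, not necessarily the paper's.

```python
# Per-client subnetwork via magnitude-based unstructured pruning (illustrative):
# each client keeps the fraction `keep_ratio` of largest-magnitude weights,
# yielding a binary mask that defines its personalized subnetwork.
import torch

def magnitude_masks(model, keep_ratio=0.3):
    """Return {param_name: 0/1 mask} keeping the largest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and norm parameters
            continue
        k = max(1, int(keep_ratio * param.numel()))
        # k-th largest magnitude = (numel - k + 1)-th smallest
        threshold = param.abs().flatten().kthvalue(param.numel() - k + 1).values
        masks[name] = (param.abs() >= threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out the pruned weights in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```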
arXiv Detail & Related papers (2021-05-02T22:10:46Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to standard federated learning, in which each client federates with other relevant clients to obtain a stronger model for its client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
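For the last entry above, a minimal sketch of client-side federation with other relevant clients: peers are weighted by how much they improve this client's validation loss, which is an assumed concretization of the paper's first-order weighting (all names are illustrative, and all clients are assumed to share one architecture).

```python
# Sketch of a FedFomo-style client-side update (illustrative): each client
# weights peers' models by how much they reduce its own validation loss, then
# takes a weighted combination of parameters.
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def val_loss(model, val_loader):
    model.eval()
    total, n = 0.0, 0
    for x, y in val_loader:
        total += F.cross_entropy(model(x), y, reduction="sum").item()
        n += y.numel()
    return total / max(n, 1)

def fomo_style_update(own_model, peer_models, val_loader):
    """Move toward peer models in proportion to their first-order loss improvement."""
    base = val_loss(own_model, val_loader)
    weights, peer_states = [], []
    for peer in peer_models:
        # Positive weight only when the peer actually helps on this client's data.
        improvement = base - val_loss(peer, val_loader)
        weights.append(max(improvement, 0.0))
        peer_states.append(peer.state_dict())
    total = sum(weights)
    if total == 0:
        return own_model  # no peer helps; keep the current personalized model
    new_state = copy.deepcopy(own_model.state_dict())
    for key in new_state:
        if not new_state[key].is_floating_point():
            continue  # skip integer buffers such as BatchNorm counters
        delta = sum(w * (s[key] - new_state[key]) for w, s in zip(weights, peer_states))
        new_state[key] = new_state[key] + delta / total
    own_model.load_state_dict(new_state)
    return own_model
```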
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.