PerFED-GAN: Personalized Federated Learning via Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2202.09155v1
- Date: Fri, 18 Feb 2022 12:08:46 GMT
- Title: PerFED-GAN: Personalized Federated Learning via Generative Adversarial
Networks
- Authors: Xingjian Cao, Gang Sun, Hongfang Yu, Mohsen Guizani
- Abstract summary: Federated learning is a distributed machine learning method that can be used to deploy AI-dependent IoT applications.
This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs).
In our experiments, the proposed method outperforms existing methods in mean test accuracy by 42% when clients' model architectures and data distributions vary significantly.
- Score: 46.17495529441229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is gaining popularity as a distributed machine learning
method that can be used to deploy AI-dependent IoT applications while
protecting client data privacy and security. Due to differences among
clients, a single global model may not perform well on all of them, so
personalized federated learning, which trains a personalized model for each
client that better suits its individual needs, has become a research hotspot. Most
personalized federated learning research, however, focuses on data
heterogeneity while ignoring the need for model architecture heterogeneity.
Most existing federated learning methods impose a uniform model architecture
on all participating clients, which conflicts with each client's individual
model and local data distribution requirements and also increases the risk of
client model leakage. This paper proposes a
federated learning method based on co-training and generative adversarial
networks (GANs) that allows each client to design its own model to participate
in federated learning training independently without sharing any model
architecture or parameter information with other clients or a center. In our
experiments, the proposed method outperforms existing methods in mean test
accuracy by 42% when clients' model architectures and data distributions vary
significantly.
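The abstract describes the approach only at a high level. Below is a minimal, hypothetical sketch of the general idea it names: clients train local GANs, exchange only pseudo-labeled synthetic samples, and co-train their independently designed models on peers' samples. Every name here (Generator, co_train_round, make_client, the client dict layout) is an illustrative assumption, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy per-client GAN generator; clients never share its weights."""
    def __init__(self, latent_dim=32, data_dim=784):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Tanh())

    def forward(self, z):
        return self.net(z)

def co_train_round(clients, n_synth=64):
    """One communication round: only synthetic samples and pseudo-labels
    cross client boundaries, never architectures or parameters."""
    # 1) Each client samples from its locally trained GAN and labels the
    #    samples with its own classifier of arbitrary architecture.
    packets = []
    for c in clients:
        z = torch.randn(n_synth, c["generator"].latent_dim)
        with torch.no_grad():
            x_syn = c["generator"](z)
            y_syn = c["classifier"](x_syn).argmax(dim=1)
        packets.append((x_syn, y_syn))
    # 2) Each client co-trains its own model on peers' labeled synthetic
    #    samples plus its private data.
    for i, c in enumerate(clients):
        x = torch.cat([c["x"]] + [p[0] for j, p in enumerate(packets) if j != i])
        y = torch.cat([c["y"]] + [p[1] for j, p in enumerate(packets) if j != i])
        c["optimizer"].zero_grad()
        F.cross_entropy(c["classifier"](x), y).backward()
        c["optimizer"].step()

def make_client(hidden):
    # Clients deliberately use different classifier architectures.
    clf = nn.Sequential(nn.Linear(784, hidden), nn.ReLU(), nn.Linear(hidden, 10))
    return {"generator": Generator(), "classifier": clf,
            "optimizer": torch.optim.SGD(clf.parameters(), lr=0.1),
            "x": torch.randn(16, 784), "y": torch.randint(0, 10, (16,))}

co_train_round([make_client(64), make_client(256)])
```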
Related papers
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned at most one model per level, and its personalized prediction sums the outputs of the models assigned to it across all levels (see the sketch after this entry).
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
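A minimal sketch of the additive prediction rule described above, assuming a simple list of assigned models (femam_predict and the toy models are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

def femam_predict(x, assigned_models):
    """Personalized prediction = sum of the outputs of the models assigned
    to this client, at most one per level."""
    out = assigned_models[0](x)
    for m in assigned_models[1:]:
        out = out + m(x)  # additive, level-by-level refinement
    return out

# Example: a shared level-0 model plus a client-specific level-1 model.
level0 = nn.Linear(8, 3)   # shared by many clients
level1 = nn.Linear(8, 3)   # assigned only to this client
pred = femam_predict(torch.randn(4, 8), [level0, level1])
```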
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Cross-Silo Federated Learning Across Divergent Domains with Iterative Parameter Alignment [4.95475852994362]
Federated learning is a method for training a machine learning model across remote clients.
We reformulate the typical federated learning setup to learn N models optimized for a common objective.
We find that the technique achieves competitive results on a variety of data partitions compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-11-08T16:42:14Z)
- FAM: fast adaptive federated meta-learning [10.980548731600116]
We propose a fast adaptive federated meta-learning (FAM) framework for collaboratively learning a single global model.
A skeleton network is grown on each client to train a personalized model by learning additional client-specific parameters from local data (see the sketch after this entry).
The personalized client models outperformed the locally trained models, demonstrating the efficacy of the FAM mechanism.
arXiv Detail & Related papers (2023-08-26T22:54:45Z)
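A hedged sketch of the FAM idea summarized above, assuming the globally learned skeleton stays fixed on the client while only additional client-specific parameters are trained (PersonalizedModel and the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class PersonalizedModel(nn.Module):
    def __init__(self, skeleton: nn.Module, feat_dim=64, n_classes=10):
        super().__init__()
        self.skeleton = skeleton
        for p in self.skeleton.parameters():
            p.requires_grad = False          # global skeleton stays fixed locally
        self.local_head = nn.Linear(feat_dim, n_classes)  # client-specific part

    def forward(self, x):
        return self.local_head(self.skeleton(x))

skeleton = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # learned in federation
model = PersonalizedModel(skeleton)
opt = torch.optim.SGD(model.local_head.parameters(), lr=0.05)

# One personalization step on local data: only client-specific params move.
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
opt.zero_grad()
nn.functional.cross_entropy(model(x), y).backward()
opt.step()
```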
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and to learn a user-specific set of parameters, leading to a personalized solution for each client (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
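A hedged sketch of the shared-representation scheme the summary describes: clients alternate between fitting a user-specific head locally and updating a common representation that the server averages. Function names, step counts, and layer sizes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(rep, head, x, y, lr=0.05, head_steps=5):
    # 1) Fit the user-specific head to local data, representation fixed.
    head_opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(head_steps):
        head_opt.zero_grad()
        F.cross_entropy(head(rep(x)), y).backward()
        head_opt.step()
    # 2) One step on the shared representation, head fixed.
    rep_opt = torch.optim.SGD(rep.parameters(), lr=lr)
    rep_opt.zero_grad()
    F.cross_entropy(head(rep(x)), y).backward()
    rep_opt.step()

def average_representations(reps):
    # Server aggregates only the shared representation, never the heads.
    avg = {k: torch.zeros_like(v) for k, v in reps[0].state_dict().items()}
    for r in reps:
        for k, v in r.state_dict().items():
            avg[k] += v / len(reps)
    for r in reps:
        r.load_state_dict(avg)

reps = [nn.Sequential(nn.Linear(32, 64), nn.ReLU()) for _ in range(3)]
heads = [nn.Linear(64, 10) for _ in range(3)]
for rep, head in zip(reps, heads):
    local_update(rep, head, torch.randn(8, 32), torch.randint(0, 10, (8,)))
average_representations(reps)
```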
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients (see the sketch after this entry).
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
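The summary does not spell out the mechanism; one common form of local memorization interpolates the global model's prediction with a k-nearest-neighbor estimate over locally memorized (embedding, label) pairs. A sketch under that assumption, with all names and the interpolation weight lam illustrative:

```python
import torch
import torch.nn.functional as F

def memorized_predict(global_logits, query_emb, mem_embs, mem_labels,
                      n_classes, k=5, lam=0.5):
    # kNN class distribution from the client's private datastore.
    dists = torch.cdist(query_emb.unsqueeze(0), mem_embs).squeeze(0)
    nn_idx = dists.topk(k, largest=False).indices
    knn_probs = F.one_hot(mem_labels[nn_idx], n_classes).float().mean(dim=0)
    # Interpolate the global model's probabilities with the local estimate.
    return lam * knn_probs + (1 - lam) * F.softmax(global_logits, dim=-1)

probs = memorized_predict(torch.randn(10), torch.randn(16),
                          torch.randn(50, 16), torch.randint(0, 10, (50,)),
                          n_classes=10)
```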
- Subspace Learning for Personalized Federated Optimization [7.475183117508927]
We propose a method to address the problem of personalized learning in AI systems.
We show that our method achieves consistent gains both in personalized and unseen client evaluation settings.
arXiv Detail & Related papers (2021-09-16T00:03:23Z)
- Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client (see the sketch after this entry).
arXiv Detail & Related papers (2021-05-02T22:10:46Z)
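A hedged sketch of the per-client subnetwork idea: a binary mask keeps only the largest-magnitude weights (unstructured pruning; the structured variant and the client-level objective are not reproduced here). magnitude_masks and keep_ratio are assumptions:

```python
import torch
import torch.nn as nn

def magnitude_masks(model: nn.Module, keep_ratio=0.3):
    """Per-parameter binary masks keeping the largest-magnitude weights."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.detach().abs().flatten().topk(k).values.min()
        masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def apply_masks(model, masks):
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(masks[name])  # zero pruned weights -> client's subnetwork

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
apply_masks(model, magnitude_masks(model, keep_ratio=0.3))
```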
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model tailored to client-specific objectives (see the sketch after this entry).
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
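A hedged sketch of a first-order weighting rule consistent with the summary: each client weights peers by how much their models reduce its own validation loss and moves toward that weighted combination. The exact rule, the names, and the shared-architecture assumption are illustrative, not the paper's specification:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def personalized_update(my_model, peer_models, x_val, y_val):
    def val_loss(m):
        with torch.no_grad():
            return F.cross_entropy(m(x_val), y_val).item()
    base = val_loss(my_model)
    # Positive weight only for peers whose model lowers my validation loss.
    w = torch.tensor([max(base - val_loss(m), 0.0) for m in peer_models])
    if w.sum() == 0:
        return            # no helpful peer this round; keep the local model
    w = w / w.sum()
    # Move toward the weighted combination of helpful peers' parameters
    # (assumes all clients share one architecture, unlike PerFED-GAN above).
    with torch.no_grad():
        peer_params = [list(m.parameters()) for m in peer_models]
        for i, p in enumerate(my_model.parameters()):
            p.add_(sum(wi * (pp[i] - p) for wi, pp in zip(w, peer_params)))

models = [nn.Linear(8, 3) for _ in range(4)]
personalized_update(models[0], models[1:], torch.randn(32, 8),
                    torch.randint(0, 3, (32,)))
```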
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently (see the sketch after this entry).
The experiments show that FML can achieve better performance than alternatives in the typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
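A hedged sketch of one local step of mutual learning between the collaboratively trained generalized model and the client's independent personalized model, using a deep-mutual-learning-style loss with mutual KL terms; the coefficient alpha and all names are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fml_local_step(gen_model, per_model, opt_gen, opt_per, x, y, alpha=0.5):
    lg, lp = gen_model(x), per_model(x)
    # Each model learns from the labels and from the other's soft predictions.
    kl_g = F.kl_div(F.log_softmax(lg, -1), F.softmax(lp.detach(), -1),
                    reduction="batchmean")
    kl_p = F.kl_div(F.log_softmax(lp, -1), F.softmax(lg.detach(), -1),
                    reduction="batchmean")
    loss_g = F.cross_entropy(lg, y) + alpha * kl_g   # generalized model
    loss_p = F.cross_entropy(lp, y) + alpha * kl_p   # personalized model
    opt_gen.zero_grad(); opt_per.zero_grad()
    (loss_g + loss_p).backward()
    opt_gen.step(); opt_per.step()

gen = nn.Linear(8, 3)
per = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
fml_local_step(gen, per,
               torch.optim.SGD(gen.parameters(), lr=0.1),
               torch.optim.SGD(per.parameters(), lr=0.1),
               torch.randn(8, 8), torch.randint(0, 3, (8,)))
```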
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.