Personalized Federated Learning using Hypernetworks
- URL: http://arxiv.org/abs/2103.04628v1
- Date: Mon, 8 Mar 2021 09:29:08 GMT
- Title: Personalized Federated Learning using Hypernetworks
- Authors: Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik
- Abstract summary: We propose pFedHN for personalized Federated HyperNetworks.
In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.
We show that pFedHN can generalize better to new clients whose distributions differ from any client observed during training.
- Score: 26.329820911200546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized federated learning is tasked with training machine learning
models for multiple clients, each with its own data distribution. The goal is
to train personalized models in a collaborative way while accounting for data
disparities across clients and reducing communication costs. We propose a novel
approach to this problem using hypernetworks, termed pFedHN for personalized
Federated HyperNetworks. In this approach, a central hypernetwork model is
trained to generate a set of models, one model for each client. This
architecture provides effective parameter sharing across clients, while
maintaining the capacity to generate unique and diverse personal models.
Furthermore, since hypernetwork parameters are never transmitted, this approach
decouples the communication cost from the trainable model size. We test pFedHN
empirically in several personalized federated learning challenges and find that
it outperforms previous methods. Finally, since hypernetworks share information
across clients we show that pFedHN can generalize better to new clients whose
distributions differ from any client observed during training.
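To make the mechanism concrete, here is a minimal sketch in PyTorch: a small MLP hypernetwork maps a learned per-client embedding to the weights of a linear client model. The architecture, sizes, and names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Maps a learned client embedding to the parameters of a linear client model."""
    def __init__(self, n_clients, embed_dim=16, in_dim=784, out_dim=10):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.embeddings = nn.Embedding(n_clients, embed_dim)  # one embedding per client
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 100), nn.ReLU(),
            nn.Linear(100, in_dim * out_dim + out_dim),  # client weights and bias
        )

    def forward(self, client_id):
        theta = self.mlp(self.embeddings(client_id))
        w = theta[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = theta[self.in_dim * self.out_dim :]
        return w, b

hnet = HyperNet(n_clients=10)
w, b = hnet(torch.tensor(3))                     # personalized parameters for client 3
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(F.linear(x, w, b), y)     # client-side loss on local data
loss.backward()                                  # gradients flow back into the hypernetwork
```

Because only the generated client model (or its gradients) crosses the network, communication scales with the client model size rather than with the hypernetwork, which stays on the server.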
Related papers
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-Level Additive Models (MAM), for better knowledge sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
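The additive prediction rule can be sketched as follows; the two-level structure and the assignment are illustrative placeholders, not FeMAM's training procedure.

```python
import torch
import torch.nn as nn

# One model per node of a multi-level structure; each client is assigned
# at most one model per level (assignments here are hypothetical).
levels = [
    {0: nn.Linear(8, 3)},                      # level 0: a single global model
    {0: nn.Linear(8, 3), 1: nn.Linear(8, 3)},  # level 1: two group models
]
assignment = {"client_a": [0, 1]}              # model index chosen at each level

def predict(client, x):
    # Personalized prediction sums the outputs of the assigned models across levels.
    return sum(levels[lvl][m](x) for lvl, m in enumerate(assignment[client]))

out = predict("client_a", torch.randn(4, 8))   # (4, 3) summed outputs
```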
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
- FedSheafHN: Personalized Federated Learning on Graph-structured Data [22.825083541211168]
We propose a model called FedSheafHN, which embeds each client's local subgraph into a server-constructed collaboration graph.
Our model improves the integration and interpretation of complex client characteristics.
It also converges quickly and generalizes effectively to new clients.
arXiv Detail & Related papers (2024-05-25T04:51:41Z)
- FAM: fast adaptive federated meta-learning [10.980548731600116]
We propose a fast adaptive federated meta-learning (FAM) framework for collaboratively learning a single global model.
A skeleton network is grown on each client to train a personalized model by learning additional client-specific parameters from local data.
The personalized client models outperformed the locally trained models, demonstrating the efficacy of the FAM mechanism.
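A rough sketch of the resulting split between shared and client-specific parameters; the module shapes and the name `personal_head` are assumptions, and FAM's actual skeleton-growing procedure is not reproduced here.

```python
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, skeleton: nn.Module, feat_dim=64, n_classes=10):
        super().__init__()
        self.skeleton = skeleton                             # federated, shared across clients
        self.personal_head = nn.Linear(feat_dim, n_classes)  # trained on local data only

    def forward(self, x):
        return self.personal_head(self.skeleton(x))

skeleton = nn.Sequential(nn.Linear(32, 64), nn.ReLU())       # the single shared skeleton
clients = [ClientModel(skeleton) for _ in range(3)]          # personal heads never leave the device
```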
arXiv Detail & Related papers (2023-08-26T22:54:45Z)
- PeFLL: Personalized Federated Learning by Learning to Learn [16.161876130822396]
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects.
At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork.
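Schematically, the pipeline maps a batch of client data to an embedding and the embedding to personalized weights; the mean-pooled descriptor and all sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

embed_net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))
hyper_net = nn.Linear(8, 20 * 2 + 2)        # emits weights and bias of a 2-class linear model

def personalize(client_batch):
    v = embed_net(client_batch).mean(dim=0)  # client descriptor: mean embedding (assumed)
    theta = hyper_net(v)
    return theta[:40].view(2, 20), theta[40:]

w, b = personalize(torch.randn(16, 20))      # a personalized model, even for an unseen client
```

Since the two networks are trained jointly, a new client only needs a forward pass through them to obtain a personalized model.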
arXiv Detail & Related papers (2023-06-08T19:12:42Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
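For reference, FL's aggregation step described above can be sketched as a weighted average of client models; the helper name `fedavg` and weighting by dataset size are conventional but assumed here.

```python
import torch

def fedavg(client_states, client_sizes):
    """Aggregate client state_dicts, weighting each client by its dataset size."""
    total = sum(client_sizes)
    return {
        key: sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

m1, m2 = torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)
global_state = fedavg([m1.state_dict(), m2.state_dict()], client_sizes=[100, 300])
# In SL, by contrast, clients would ship cut-layer activations every step instead of models.
```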
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks [46.17495529441229]
Federated learning is a distributed machine learning method that can be used to deploy AI-dependent IoT applications.
This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs).
In our experiments, the proposed method outperforms the existing methods in mean test accuracy by 42% when the client's model architecture and data distribution vary significantly.
arXiv Detail & Related papers (2022-02-18T12:08:46Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
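In that reading, and hedging this as a sketch consistent with the summary rather than the paper's full derivation, the model and the two EM steps look as follows:

```latex
% Hierarchical model: the server parameters \theta are the prior mean
\[
\phi_i \sim \mathcal{N}(\theta, \sigma^2 I), \qquad D_i \sim p(D_i \mid \phi_i)
\]
% Hard E-step: each client's local training is a MAP estimate
\[
\phi_i^{\star} = \arg\max_{\phi_i} \; \log p(D_i \mid \phi_i)
  - \frac{1}{2\sigma^2} \lVert \phi_i - \theta \rVert^2
\]
% M-step: maximizing over \theta under the Gaussian prior recovers
% the FedAvg aggregation step
\[
\theta \leftarrow \frac{1}{N} \sum_{i=1}^{N} \phi_i^{\star}
\]
```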
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
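As a generic illustration of the distillation ingredient (not FedKD's exact scheme), a compact student can be trained against a larger teacher's soft predictions, so only the small model needs to be communicated:

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student predictions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

loss = distill_loss(torch.randn(8, 10), torch.randn(8, 10))
```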
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client.
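A toy magnitude-based mask over a shared weight tensor stands in for the per-client subnetwork idea; the paper's structured and unstructured pruning criteria are not reproduced here.

```python
import torch

def subnetwork_mask(weight, keep_frac=0.3):
    """Keep the largest-magnitude fraction of weights and zero out the rest."""
    k = max(1, int(keep_frac * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

w = torch.randn(10, 10)
mask = subnetwork_mask(w)   # a client would train and communicate only w * mask
sparse_w = w * mask
```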
arXiv Detail & Related papers (2021-05-02T22:10:46Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
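One way to picture this per-client federation, with the peer weights below chosen purely for illustration:

```python
import torch

def personalized_aggregate(states, weights):
    """Combine peer models with client-specific weights (weights sum to 1)."""
    return {
        key: sum(w * s[key] for w, s in zip(weights, states))
        for key in states[0]
    }

models = [torch.nn.Linear(4, 2).state_dict() for _ in range(3)]
# Client 0 mixes itself with one relevant peer and ignores the third client.
client0_state = personalized_aggregate(models, weights=[0.6, 0.4, 0.0])
```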
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.