FAM: fast adaptive federated meta-learning
- URL: http://arxiv.org/abs/2308.13970v2
- Date: Fri, 1 Sep 2023 06:42:53 GMT
- Title: FAM: fast adaptive federated meta-learning
- Authors: Indrajeet Kumar Sinha, Shekhar Verma and Krishna Pratap Singh
- Abstract summary: We propose a fast adaptive federated meta-learning (FAM) framework for collaboratively learning a single global model.
A skeleton network is grown on each client to train a personalized model by learning additional client-specific parameters from local data.
The personalized client models outperformed the locally trained models, demonstrating the efficacy of the FAM mechanism.
- Score: 10.980548731600116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a fast adaptive federated meta-learning (FAM)
framework for collaboratively learning a single global model, which can then be
personalized locally on individual clients. Federated learning enables multiple
clients to collaborate to train a model without sharing data. Clients with
insufficient data or data diversity participate in federated learning to learn
a model with superior performance. Nonetheless, learning suffers when data
distributions diverge. A global model that can be adapted using
client-specific information to create personalized models on each client is
therefore required. MRI data exemplifies this problem: one, due to data
acquisition challenges, local data at a site is insufficient for training an
accurate model; two, data sharing is restricted due to privacy
concerns; and three, a learnt shared global model needs personalization on
account of domain shift across client sites. The global model
is sparse and captures the common features in the MRI. This skeleton network is
grown on each client to train a personalized model by learning additional
client-specific parameters from local data. Experimental results show that the
personalization process at each client quickly converges using a limited number
of epochs. The personalized client models outperformed the locally trained
models, demonstrating the efficacy of the FAM mechanism. Additionally, the
sparse parameter set to be communicated during federated learning drastically
reduced communication overhead, which makes the scheme viable for networks with
limited resources.
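The skeleton-growing idea above can be sketched in a few lines: a dense global model is pruned to a sparse "skeleton" of shared features, and each client learns additional parameters only in the positions the skeleton leaves empty. This is an illustrative toy, not the authors' implementation; the function names, the magnitude-pruning rule, and the keep fraction are all assumptions.

```python
# Toy sketch of FAM-style sparse-global + client-specific personalization.
# Illustrative only: names and the pruning rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sparsify(weights, keep_frac=0.2):
    """Keep only the largest-magnitude fraction of weights (the 'skeleton')."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Server side: a dense global weight matrix, pruned to a sparse skeleton.
global_weights = rng.standard_normal((8, 8))
skeleton, mask = sparsify(global_weights, keep_frac=0.2)

# Client side: 'grow' the skeleton by learning extra client-specific
# parameters only where the skeleton is zero, preserving the shared features.
client_delta = rng.standard_normal(skeleton.shape) * 0.1 * (~mask)
personalized = skeleton + client_delta

# Only the sparse skeleton is communicated during federated rounds,
# which is where the claimed reduction in communication overhead comes from.
nonzero_frac = mask.mean()
print(f"communicated fraction of parameters: {nonzero_frac:.2f}")
```

Because the client-specific delta is zero wherever the skeleton is nonzero, personalization cannot overwrite the shared features, which matches the paper's description of growing the skeleton network with additional client-specific parameters.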
Related papers
- Personalized Hierarchical Split Federated Learning in Wireless Networks [24.664469755746463]
We propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specially designed to achieve better personalization performance.
We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregations on the global model.
Once the global model is trained, we fine-tune each client to obtain the personalized models.
arXiv Detail & Related papers (2024-11-09T02:41:53Z) - Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called "Multi-level Additive Models" (MAM), for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
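The additive prediction rule described above is simple to sketch: a client's output is the sum of the outputs of the one model it is assigned at each level. The linear "models" below are stand-ins chosen for illustration, not FeMAM's actual architecture.

```python
# Toy sketch of a multi-level additive prediction (assumed interface,
# not FeMAM's code): one model per level, outputs summed per client.
import numpy as np

def predict(models, x):
    """Sum the outputs of the models assigned to a client across all levels."""
    return sum(m(x) for m in models)

# Level 0: shared global model; level 1: cluster model; level 2: a
# client-specific model. Simple linear maps here for illustration.
global_model  = lambda x: 0.5 * x
cluster_model = lambda x: 0.3 * x
client_model  = lambda x: 0.2 * x

x = np.array([1.0, 2.0])
y = predict([global_model, cluster_model, client_model], x)
print(y)
```

Each level refines the levels above it additively, which is how such a structure can interpolate between fully shared (global only) and fully personalized (all levels) behavior.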
arXiv Detail & Related papers (2024-05-26T07:54:53Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel uniform data sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks [46.17495529441229]
Federated learning is a distributed machine learning method that can be used to deploy AI-dependent IoT applications.
This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs).
In our experiments, the proposed method outperforms the existing methods in mean test accuracy by 42% when the client's model architecture and data distribution vary significantly.
arXiv Detail & Related papers (2022-02-18T12:08:46Z) - Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
arXiv Detail & Related papers (2021-11-17T19:40:07Z) - Federated Whole Prostate Segmentation in MRI with Personalized Neural Architectures [11.563695244722613]
Federated learning (FL) is a way to train machine learning models without the need for centralized datasets.
In this work, we combine FL with an AutoML technique based on local neural architecture search by training a "supernet".
The proposed method is evaluated on four different datasets from 3D prostate MRI and shown to improve the local models' performance after adaptation.
arXiv Detail & Related papers (2021-07-16T20:35:29Z) - Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we leverage finding a small subnetwork for each client.
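Selecting a small subnetwork per client amounts to choosing a binary mask over the shared weights. The sketch below uses a top-k score rule purely for illustration; the scoring function, keep fraction, and names are assumptions, not the paper's method.

```python
# Illustrative sketch (assumed, not the paper's code): personalization by
# selecting a small subnetwork, i.e. a binary mask over shared weights.
import numpy as np

rng = np.random.default_rng(1)
shared = rng.standard_normal(100)

def client_subnetwork(weights, local_scores, keep_frac=0.1):
    """Pick the top-scoring fraction of weights as this client's subnetwork."""
    k = max(1, int(keep_frac * weights.size))
    idx = np.argsort(local_scores)[-k:]          # indices of the k top scores
    mask = np.zeros_like(weights, dtype=bool)
    mask[idx] = True
    return weights * mask, mask

scores = rng.random(100)  # stand-in for client-level importance scores
sub, mask = client_subnetwork(shared, scores, keep_frac=0.1)
print(mask.sum())  # 10 of 100 weights kept
```

Different clients with different score vectors end up with different subnetworks of the same shared model, which is the essence of mask-based personalization.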
arXiv Detail & Related papers (2021-05-02T22:10:46Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in a typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.