Multi-Model Federated Learning
- URL: http://arxiv.org/abs/2201.02582v1
- Date: Fri, 7 Jan 2022 18:24:23 GMT
- Title: Multi-Model Federated Learning
- Authors: Neelkamal Bhuyan and Sharayu Moharir
- Abstract summary: We extend federated learning to the setting where multiple unrelated models are trained simultaneously.
Every client is able to train any one of M models at a time, and the server maintains, for each of the M models, a version that is typically a suitably averaged aggregate of the copies computed by the clients.
We propose multiple policies for assigning learning tasks to clients over time. In the first policy, we extend the widely studied FedAvg to multi-model learning by allotting models to clients in an i.i.d. stochastic manner.
In addition, we propose two new policies for client selection in a multi-model federated setting which make decisions based on current local losses for each client-model pair.
- Score: 8.629912408966145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a form of distributed learning with the key challenge
being the non-identically distributed nature of the data in the participating
clients. In this paper, we extend federated learning to the setting where
multiple unrelated models are trained simultaneously. Specifically, every
client is able to train any one of M models at a time, and the server
maintains, for each of the M models, a version that is typically a suitably
averaged aggregate of the copies computed by the clients. We propose multiple policies for
assigning learning tasks to clients over time. In the first policy, we extend
the widely studied FedAvg to multi-model learning by allotting models to
clients in an i.i.d. stochastic manner. In addition, we propose two new
policies for client selection in a multi-model federated setting which make
decisions based on current local losses for each client-model pair. We compare
the performance of the policies on tasks involving synthetic and real-world
data and characterize the performance of the proposed policies. The key
take-away from our work is that the proposed multi-model policies perform
better than, or at least as well as, single-model training using FedAvg.
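To make the assignment policies concrete, here is a minimal sketch of multi-model FedAvg with i.i.d. stochastic model allotment, together with one plausible loss-aware assignment rule. Everything here (the function names, the plain-mean aggregation, the greedy argmax rule) is an illustrative assumption rather than the paper's exact specification.

```python
import random
import numpy as np

def multi_model_fedavg(global_models, clients, rounds, local_update):
    """Multi-model FedAvg sketch: each round, every client is allotted
    one of the M models i.i.d. uniformly at random, trains it locally,
    and the server averages the returned copies per model.

    global_models: list of M numpy weight vectors, one per model.
    clients:       list of opaque client datasets.
    local_update:  fn(weights, client_data) -> locally trained weights.
    """
    M = len(global_models)
    for _ in range(rounds):
        # i.i.d. stochastic allotment: one model per client per round.
        allotment = [random.randrange(M) for _ in clients]
        for m in range(M):
            trained = [local_update(global_models[m].copy(), c)
                       for c, a in zip(clients, allotment) if a == m]
            if trained:  # skip models no client drew this round
                global_models[m] = np.mean(trained, axis=0)
    return global_models

def greedy_loss_allotment(local_losses):
    """One plausible loss-aware rule (not necessarily the paper's exact
    policy): give each client the model on which it currently incurs
    the largest local loss. local_losses has shape (num_clients, M)."""
    return list(np.argmax(np.asarray(local_losses), axis=1))
```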
Related papers
- Personalized Hierarchical Split Federated Learning in Wireless Networks [24.664469755746463]
We propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specially designed to achieve better personalization performance.
We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregations on the global model.
Once the global model is trained, we fine-tune each client to obtain the personalized models.
arXiv Detail & Related papers (2024-11-09T02:41:53Z)
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that can tackle class-imbalance by utilizing an adaptive inter-client co-learning approach.
arXiv Detail & Related papers (2024-11-04T05:44:28Z)
- Leveraging Foundation Models for Multi-modal Federated Learning with Incomplete Modality [41.79433449873368]
We propose a novel multi-modal federated learning method, Federated Multi-modal contrastiVe training with Pre-trained completion (FedMVP).
FedMVP integrates the large-scale pre-trained models to enhance the federated training.
We demonstrate that the model achieves superior performance on two real-world image-text classification datasets.
arXiv Detail & Related papers (2024-06-16T19:18:06Z)
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called "Multi-level Additive Models (MAM)", for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
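As a minimal sketch of that additive prediction rule (the level structure and model interface below are assumptions for illustration):

```python
import numpy as np

def femam_predict(x, assigned_models):
    """FeMAM-style additive prediction for one client: sum the outputs
    of the models assigned to it, at most one per level; levels with no
    assignment simply contribute nothing.

    assigned_models: dict mapping level -> callable(x) -> logits array.
    """
    outputs = [model(x) for model in assigned_models.values()]
    return np.sum(outputs, axis=0) if outputs else 0.0  # no assignments
```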
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In many real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of non-i.i.d. scenario in which clients own incomplete sets of classes.
Our proposed algorithm, MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Communication-Efficient Multimodal Federated Learning: Joint Modality and Client Selection [14.261582708240407]
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients collect measurements across multiple modalities.
Key challenges to multimodal FL remain unaddressed, particularly in heterogeneous network settings.
We propose mmFedMC, a new FL methodology that can tackle the above-mentioned challenges in multimodal settings.
arXiv Detail & Related papers (2024-01-30T02:16:19Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
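A minimal sketch of the prompt idea, assuming an additive learnable prompt on the input image; pFedPT's exact prompt construction may differ, so treat the details as illustrative.

```python
import torch
import torch.nn as nn

class PromptedClient(nn.Module):
    """Shared backbone plus a client-local learnable visual prompt.
    Only the backbone participates in federated averaging; the prompt
    stays on the client and encodes its local data distribution.
    (The additive input prompt is an assumption for illustration.)"""
    def __init__(self, backbone: nn.Module, image_shape=(3, 32, 32)):
        super().__init__()
        self.backbone = backbone                              # federated part
        self.prompt = nn.Parameter(torch.zeros(image_shape))  # local part

    def forward(self, x):
        # The prompt is broadcast over the batch dimension.
        return self.backbone(x + self.prompt)
```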
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
Often the fine-tuned models are readily available while their training data is not, which creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
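The baseline idea of operating in parameter space is a (weighted) average of matching parameters, sketched below; the paper's actual merging method is more sophisticated than this plain average, so this is only the simplest instance of the idea.

```python
import torch

def merge_in_parameter_space(state_dicts, weights=None):
    """Dataless merge sketch: combine several fine-tuned models by
    weighted-averaging parameters with the same name. This is the
    simplest instance of parameter-space fusion, not the paper's
    exact algorithm."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {
        name: sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }
```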
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks [46.17495529441229]
Federated learning is a distributed machine learning method that can be used to deploy AI-dependent IoT applications.
This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs).
In our experiments, the proposed method outperforms existing methods in mean test accuracy by 42% when the clients' model architectures and data distributions vary significantly.
arXiv Detail & Related papers (2022-02-18T12:08:46Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in typical federated learning settings.
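A minimal sketch of a mutual-learning objective in the spirit of FML: on each client, the generalized and personalized models both fit the labels and are distilled toward each other's predictions. The loss form and coefficients below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(logits_g, logits_p, targets, alpha=0.5, beta=0.5):
    """Per-batch losses for the generalized (g) and personalized (p)
    models: cross-entropy on the labels plus a KL term pulling each
    model toward the other's (detached) predictive distribution."""
    kl_g = F.kl_div(F.log_softmax(logits_g, dim=1),
                    F.softmax(logits_p.detach(), dim=1),
                    reduction="batchmean")
    kl_p = F.kl_div(F.log_softmax(logits_p, dim=1),
                    F.softmax(logits_g.detach(), dim=1),
                    reduction="batchmean")
    loss_g = alpha * F.cross_entropy(logits_g, targets) + (1 - alpha) * kl_g
    loss_p = beta * F.cross_entropy(logits_p, targets) + (1 - beta) * kl_p
    return loss_g, loss_p
```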
arXiv Detail & Related papers (2020-06-27T09:35:03Z)