Personalized Federated Learning with Contextual Modulation and
Meta-Learning
- URL: http://arxiv.org/abs/2312.15191v1
- Date: Sat, 23 Dec 2023 08:18:22 GMT
- Title: Personalized Federated Learning with Contextual Modulation and
Meta-Learning
- Authors: Anna Vettoruzzo, Mohamed-Rafik Bouguelia, Thorsteinn Rögnvaldsson
- Abstract summary: Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources.
We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities.
- Score: 2.7716102039510564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has emerged as a promising approach for training machine
learning models on decentralized data sources while preserving data privacy.
However, challenges such as communication bottlenecks, heterogeneity of client
devices, and non-i.i.d. data distribution pose significant obstacles to
achieving optimal model performance. We propose a novel framework that combines
federated learning with meta-learning techniques to enhance both efficiency and
generalization capabilities. Our approach introduces a federated modulator that
learns contextual information from data batches and uses this knowledge to
generate modulation parameters. These parameters dynamically adjust the
activations of a base model, which operates using a MAML-based approach for
model personalization. Experimental results across diverse datasets demonstrate
improvements in convergence speed and model performance compared to existing
federated learning approaches. These findings highlight the potential
of incorporating contextual information and meta-learning techniques into
federated learning, paving the way for advancements in distributed machine
learning paradigms.
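No reference implementation accompanies this listing, so the sketch below is a minimal reading of the mechanism the abstract describes, under two explicit assumptions: the federated modulator emits FiLM-style scale/shift parameters from a mean-pooled batch context, and personalization is a single MAML inner step. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

class FederatedModulator(nn.Module):
    """Maps a mean-pooled batch context to scale/shift parameters
    (FiLM-style modulation; the paper's exact architecture may differ)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 2 * hidden_dim))

    def forward(self, batch_x):
        context = batch_x.mean(dim=0)          # contextual information from the batch
        gamma, beta = self.net(context).chunk(2)
        return gamma, beta

class BaseModel(nn.Module):
    """Base model whose hidden activations are dynamically modulated."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, gamma, beta):
        h = F.relu(self.fc1(x))
        return self.fc2(gamma * h + beta)      # modulated activations

def maml_personalize(model, modulator, x, y, inner_lr=0.01):
    """One MAML-style inner step: adapt the base model on a client's support
    batch while the modulator supplies contextual modulation parameters."""
    gamma, beta = modulator(x)
    params = dict(model.named_parameters())
    logits = functional_call(model, params, (x, gamma, beta))
    loss = F.cross_entropy(logits, y)
    grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
    return {k: w - inner_lr * g for (k, w), g in zip(params.items(), grads)}

model, mod = BaseModel(16, 32, 4), FederatedModulator(16, 32)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
adapted = maml_personalize(model, mod, x, y)   # evaluate on a query batch next
```

In a full training loop, the adapted parameters would be evaluated on a query batch, the outer loss backpropagated into both the modulator and the base initialization, and the updates aggregated across clients in the usual federated fashion.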
Related papers
- Prioritizing Modalities: Flexible Importance Scheduling in Federated Multimodal Learning [5.421492821020181]
Federated Learning (FL) is a distributed machine learning approach that enables devices to collaboratively train models without sharing their local data.
Applying FL to real-world data presents challenges, particularly as most existing FL research focuses on unimodal data.
We propose FlexMod, a novel approach to enhance computational efficiency in multimodal federated learning (MFL) by adaptively allocating training resources for each modality encoder.
arXiv Detail & Related papers (2024-08-13T01:14:27Z)
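The summary names the mechanism (adaptive per-modality resource allocation) but not the rule; a minimal sketch of the general idea, with an illustrative proportional-allocation rule and made-up importance scores, might look like this:

```python
def allocate_epochs(importance, total_epochs):
    """Split a local epoch budget across modality encoders in proportion to
    an importance score (illustrative rule, not FlexMod's exact one)."""
    total = sum(importance.values())
    return {m: max(1, round(total_epochs * s / total))
            for m, s in importance.items()}

# importance could come from, e.g., per-modality validation-loss improvement
importance = {"image": 0.6, "audio": 0.25, "text": 0.15}
print(allocate_epochs(importance, total_epochs=10))
# -> {'image': 6, 'audio': 2, 'text': 2}
```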
- Promoting Data and Model Privacy in Federated Learning through Quantized LoRA [41.81020951061438]
We introduce a method that only needs to distribute a quantized version of the model's parameters during training.
We combine this quantization strategy with LoRA, a popular and parameter-efficient fine-tuning method, to significantly reduce communication costs in federated learning.
The proposed framework, named FedLPP, successfully ensures both data and model privacy in the federated learning context.
arXiv Detail & Related papers (2024-06-16T15:23:07Z)
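The two named ingredients, a quantized backbone and LoRA adapters, can be combined as in the sketch below; the int8 scheme, layer shapes, and names are assumptions, not FedLPP's actual design:

```python
import torch
import torch.nn as nn

def quantize_int8(w):
    """Uniform symmetric int8 quantization (illustrative scheme)."""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

class LoRALinear(nn.Module):
    """Frozen dequantized base weight plus a trainable low-rank update;
    only A and B would travel back to the server."""
    def __init__(self, q_weight, scale, rank=4):
        super().__init__()
        self.register_buffer("base", q_weight.float() * scale)  # frozen
        out_f, in_f = q_weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x):
        return x @ (self.base + self.B @ self.A).t()

# server side: quantize once, ship (q, scale); clients train only A and B
q, s = quantize_int8(torch.randn(64, 128))
print(LoRALinear(q, s)(torch.randn(2, 128)).shape)  # torch.Size([2, 64])
```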
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
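A minimal sketch of the adapter side of this design, assuming a frozen feature extractor, small bottleneck experts, and a top-k softmax router (all dimensions and the routing rule are illustrative):

```python
import torch
import torch.nn as nn

class MoEAdapter(nn.Module):
    """Bank of bottleneck adapters with a top-k router, applied as a
    residual update on frozen backbone features."""
    def __init__(self, dim, num_experts=4, bottleneck=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(),
                          nn.Linear(bottleneck, dim))
            for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, h):                    # h: frozen features (B, dim)
        weights, idx = self.router(h).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(h)
        for k in range(self.top_k):          # dispatch to selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(h[mask])
        return h + out                       # residual adapter update

feats = torch.randn(8, 512)                 # stand-in for frozen CLIP features
print(MoEAdapter(512)(feats).shape)         # torch.Size([8, 512])
```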
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
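Only the high-level recipe is given here; the core idea of treating flattened network weights as diffusion samples can be sketched with a toy DDPM-style training step. D2NWG itself diffuses in a learned latent space with dataset conditioning, which this deliberately omits:

```python
import torch
import torch.nn as nn

T = 100                                        # toy diffusion schedule
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# denoiser over 256-dim flattened weights plus a 1-dim timestep feature
denoiser = nn.Sequential(nn.Linear(257, 512), nn.SiLU(), nn.Linear(512, 256))

def diffusion_loss(w0):                        # w0: flattened weights (B, 256)
    t = torch.randint(0, T, (w0.size(0),))
    a = alphas_bar[t].unsqueeze(1)
    noise = torch.randn_like(w0)
    wt = a.sqrt() * w0 + (1 - a).sqrt() * noise      # forward noising
    t_feat = (t.float() / T).unsqueeze(1)            # crude timestep embedding
    pred = denoiser(torch.cat([wt, t_feat], dim=1))
    return ((pred - noise) ** 2).mean()              # predict the added noise

weight_zoo = torch.randn(32, 256)              # stand-in for trained-weight samples
print(diffusion_loss(weight_zoo).item())
```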
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
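The summary does not spell out the regularizer, so the sketch below uses a generic proximal penalty that pulls local weights toward a reference trajectory point; it is in the spirit of trajectory regularization, not FedPTR's actual projected objective:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_step(model, x, y, w_ref, lr=0.1, lam=0.5):
    """One client update: task loss plus a penalty toward a reference
    trajectory w_ref (a generic stand-in for the projected trajectory)."""
    loss = F.cross_entropy(model(x), y)
    reg = sum(((w - r) ** 2).sum() for w, r in zip(model.parameters(), w_ref))
    total = loss + lam * reg
    model.zero_grad()
    total.backward()
    with torch.no_grad():
        for w in model.parameters():
            w -= lr * w.grad
    return total.item()

model = nn.Linear(10, 3)
w_ref = [w.detach().clone() for w in model.parameters()]  # stand-in reference
print(local_step(model, torch.randn(8, 10), torch.randint(0, 3, (8,)), w_ref))
```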
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
Because the data behind each fine-tuned model is often unavailable, this creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
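Merging in parameter space can be as simple as a weighted average of state dicts, as sketched below; the paper's actual merger is more sophisticated than plain averaging, so treat this only as the baseline idea:

```python
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts, weights=None):
    """Fuse same-architecture models by (weighted) parameter averaging."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}

# e.g. two models fine-tuned on different private datasets
a, b = nn.Linear(4, 2), nn.Linear(4, 2)
fused = nn.Linear(4, 2)
fused.load_state_dict(merge_state_dicts([a.state_dict(), b.state_dict()]))
```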
- Meta Knowledge Condensation for Federated Learning [65.20774786251683]
Existing federated learning paradigms usually extensively exchange distributed models at a central solver to achieve a more powerful model.
This incurs a severe communication burden between a server and multiple clients, especially when data distributions are heterogeneous.
Unlike existing paradigms, we introduce an alternative perspective to significantly decrease the communication cost in federated learning.
arXiv Detail & Related papers (2022-09-29T15:07:37Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET, in which small models are trained on clients and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
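The transfer direction described, small client models teaching a larger server model, suggests ensemble distillation on unlabeled transfer data. The sketch below uses a plain averaged-ensemble target; Fed-ET's own weighted consensus scheme is more involved:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_to_server(server, client_models, public_x, lr=1e-3, temp=2.0):
    """One distillation step: the server model matches the averaged soft
    predictions of the client ensemble on unlabeled data."""
    opt = torch.optim.Adam(server.parameters(), lr=lr)
    with torch.no_grad():
        target = torch.stack([F.softmax(m(public_x) / temp, dim=-1)
                              for m in client_models]).mean(0)
    loss = F.kl_div(F.log_softmax(server(public_x) / temp, dim=-1),
                    target, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

clients = [nn.Linear(10, 5) for _ in range(3)]    # small client models
server = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 5))
print(distill_to_server(server, clients, torch.randn(16, 10)))
```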
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which aims to control weight communication and aggregation, augmented with a tailored learning algorithm to personalize the resulting models at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
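PC-FedAvg's specific communication and aggregation control is not detailed in the summary; a common personalization pattern it gestures at, aggregating only shared parameters while each client keeps a personalized head, can be sketched as follows (the "body"/"head" split is an assumption):

```python
import torch
import torch.nn as nn

SHARED_PREFIX = "body"        # illustrative: only the shared body is averaged

def fedavg_shared(client_states):
    """Average only shared parameters across clients; personalized heads
    never leave the clients (a generic pattern, not PC-FedAvg's exact one)."""
    keys = [k for k in client_states[0] if k.startswith(SHARED_PREFIX)]
    return {k: torch.stack([s[k] for s in client_states]).mean(0) for k in keys}

def make_client():
    return nn.ModuleDict({"body": nn.Linear(10, 32), "head": nn.Linear(32, 2)})

clients = [make_client() for _ in range(4)]
shared = fedavg_shared([c.state_dict() for c in clients])
for c in clients:                         # broadcast the shared body only
    c.load_state_dict(shared, strict=False)
```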
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
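Reading "distributed control and aggregation across regions" as a two-level hierarchy, one round might aggregate clients at regional edge servers and then combine the edge results globally; this is a minimal interpretation, not Edge-DemLearn's full mechanism:

```python
import torch

def average(states):
    """Plain parameter averaging over a list of state dicts."""
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

def hierarchical_round(regions):
    """Two-level aggregation: per-region edge averaging, then a global step."""
    edge_models = [average(clients) for clients in regions]
    return average(edge_models)

# two toy regions whose "models" are single tensors
regions = [[{"w": torch.ones(3)}, {"w": torch.zeros(3)}],
           [{"w": torch.full((3,), 2.0)}]]
print(hierarchical_round(regions))        # {'w': tensor([1.2500, 1.2500, 1.2500])}
```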
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.