FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning
- URL: http://arxiv.org/abs/2307.13716v4
- Date: Tue, 19 Mar 2024 11:21:07 GMT
- Title: FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning
- Authors: Leiming Chen, Weishan Zhang, Cihao Dong, Sibo Qiao, Ziling Huang, Yuming Nie, Zhaoxiang Hou, Chee Wei Tan
- Abstract summary: We propose FedDRL, a two-stage model fusion approach based on reinforcement learning.
In the first stage, our method filters out malicious models and selects trusted client models to participate in model fusion.
In the second stage, the FedDRL algorithm adaptively adjusts the weights of the trusted client models and aggregates the optimal global model.
- Score: 7.846139591790014
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional federated learning uses the number of samples to calculate the weight of each client model and uses this fixed weight to fuse the global model. However, in practical scenarios, device and data heterogeneity across clients lead to differences in the quality of each client's model, so the contribution to the global model is not wholly determined by sample size. In addition, if clients intentionally upload low-quality or malicious models, using these models for aggregation will severely degrade global model accuracy. Traditional federated learning algorithms do not address these issues. To solve this problem, we propose FedDRL, a two-stage model fusion approach based on reinforcement learning. In the first stage, our method filters out malicious models and selects trusted client models to participate in model fusion. In the second stage, the FedDRL algorithm adaptively adjusts the weights of the trusted client models and aggregates the optimal global model. We also define five model fusion scenarios and compare our method with two baseline algorithms in those scenarios. The experimental results show that our algorithm is more reliable than other algorithms while maintaining accuracy.
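A minimal sketch of the two-stage idea is given below, assuming NumPy parameter vectors and a linear classifier scored on held-out validation data. The `evaluate` helper, the trust threshold, and the softmax weighting are illustrative stand-ins; FedDRL itself learns the filtering and the fusion weights with a reinforcement learning agent.

```python
import numpy as np

def evaluate(params, X_val, y_val):
    # Hypothetical quality signal: validation accuracy of a linear
    # classifier. Stands in for the reward used by the RL agent.
    preds = (X_val @ params > 0).astype(int)
    return float(np.mean(preds == y_val))

def feddrl_style_fusion(client_params, X_val, y_val, trust_threshold=0.5):
    # Stage 1: score every uploaded model and filter out low-quality
    # or malicious ones, keeping only trusted client models.
    scores = np.array([evaluate(p, X_val, y_val) for p in client_params])
    trusted = [p for p, s in zip(client_params, scores) if s >= trust_threshold]
    trusted_scores = scores[scores >= trust_threshold]
    if not trusted:
        raise ValueError("no client model passed the trust threshold")

    # Stage 2: adaptively weight the trusted models (a softmax over
    # scores here; FedDRL learns these weights instead) and aggregate.
    w = np.exp(trusted_scores)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, trusted))
```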
Related papers
- Dual-Criterion Model Aggregation in Federated Learning: Balancing Data Quantity and Quality [0.0]
Federated learning (FL) has become one of the key methods for privacy-preserving collaborative learning.
An aggregation algorithm is recognized as one of the most crucial components for ensuring the efficacy and security of the system.
This study proposes a novel dual-criterion weighted aggregation algorithm involving the quantity and quality of data from the client node.
arXiv Detail & Related papers (2024-11-12T14:09:16Z)
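A minimal sketch of combining data quantity and quality into aggregation weights, assuming a simple product rule over per-client sample counts and quality scores; the paper's actual combination formula may differ.

```python
import numpy as np

def dual_criterion_weights(sample_counts, quality_scores):
    # Reward clients for having both more data and better data; the
    # product rule is an assumption, not the paper's exact formula.
    counts = np.asarray(sample_counts, dtype=float)
    quality = np.asarray(quality_scores, dtype=float)
    raw = counts * quality
    return raw / raw.sum()  # normalize to a convex combination

# Client 1 has more data, client 2 has better data.
print(dual_criterion_weights([1000, 200], [0.4, 0.9]))  # ~[0.69, 0.31]
```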
- FusionBench: A Comprehensive Benchmark of Deep Model Fusion [78.80920533793595]
Deep model fusion is a technique that unifies the predictions or parameters of several deep neural networks into a single model.
FusionBench is the first comprehensive benchmark dedicated to deep model fusion.
arXiv Detail & Related papers (2024-06-05T13:54:28Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
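The parameter-space merge underlying this line of work can be illustrated by a weighted average over the state dicts of identically shaped models. The uniform average below is only a baseline sketch; the paper's dataless method computes a more refined per-layer merge.

```python
import numpy as np

def merge_state_dicts(state_dicts, weights=None):
    # Average several same-architecture models key by key in
    # parameter space; no training data is needed.
    n = len(state_dicts)
    weights = [1.0 / n] * n if weights is None else weights
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}

# Toy example with one shared parameter tensor.
a = {"linear.weight": np.ones((2, 2))}
b = {"linear.weight": 3 * np.ones((2, 2))}
print(merge_state_dicts([a, b])["linear.weight"])  # all entries 2.0
```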
- FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP Tasks [38.68736962054861]
FewFedWeight is a few-shot federated learning framework across multiple tasks.
It trains client models on isolated devices without sharing data.
It can significantly improve the performance of client models on 61% of tasks, with an average performance improvement rate of 30.5% over the baseline.
arXiv Detail & Related papers (2022-12-16T09:01:56Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
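One way to picture server-side aggregation optimization is a search over convex combinations of client models, scored on a small server-side proxy set. The random search and linear scoring below are assumptions standing in for SmartFL's gradient-based subspace training.

```python
import numpy as np

def optimize_aggregation(client_params, proxy_X, proxy_y, trials=200, seed=0):
    # Sample convex combinations of client models (the subspace spanned
    # by client updates) and keep the best on a server-side proxy set.
    rng = np.random.default_rng(seed)
    stacked = np.stack(client_params)  # (n_clients, dim)
    best_w, best_acc = None, -np.inf
    for _ in range(trials):
        w = rng.dirichlet(np.ones(len(client_params)))
        fused = w @ stacked
        acc = np.mean(((proxy_X @ fused) > 0) == proxy_y)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w @ stacked, best_w
```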
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve these two challenges (distribution shift across clients and noise in local data) simultaneously.
We provide comprehensive theoretical analysis, including robustness, convergence, and generalization analyses.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
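Local mixup itself is a standard augmentation; a sketch of how a client might apply it to a local batch is shown below, with an arbitrary Beta hyperparameter and soft labels assumed.

```python
import numpy as np

def local_mixup(X, y, alpha=0.2, seed=0):
    # Convexly combine random example pairs to smooth local noise;
    # y is assumed to be one-hot / soft labels.
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(X))
    return lam * X + (1 - lam) * X[perm], lam * y + (1 - lam) * y[perm]
```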
- RSCFed: Random Sampling Consensus Federated Semi-supervised Learning [40.998176838813045]
Federated semi-supervised learning (FSSL) aims to derive a global model by training fully-labeled and fully-unlabeled clients or training partially labeled clients.
We present Random Sampling Consensus Federated learning (RSCFed), which accounts for the uneven reliability of models from fully-labeled, fully-unlabeled, or partially labeled clients.
arXiv Detail & Related papers (2022-03-26T05:10:44Z)
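The random-sampling-consensus idea can be sketched as repeated client sub-sampling followed by two-level averaging; RSCFed additionally reweights clients within each subset (e.g., by distance to the sub-consensus), which is omitted here.

```python
import numpy as np

def rscfed_style_round(client_params, n_subsets=3, subset_size=4, seed=0):
    # Draw several random client subsets, average each into a
    # sub-consensus model, then average the sub-consensus models.
    rng = np.random.default_rng(seed)
    stacked = np.stack(client_params)  # (n_clients, dim)
    subs = []
    for _ in range(n_subsets):
        idx = rng.choice(len(client_params), size=subset_size, replace=False)
        subs.append(stacked[idx].mean(axis=0))
    return np.mean(subs, axis=0)  # global model for this round
```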
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which controls weight communication and aggregation and augments them with a tailored learning algorithm to personalize the resulting model at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
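A generic way to realize client-side personalization, starting from the aggregated global model and taking a few local gradient steps, is sketched below for a linear least-squares model; PC-FedAvg's actual mechanism for controlling weight communication and aggregation is more involved.

```python
import numpy as np

def personalize(global_params, X_local, y_local, lr=0.01, steps=20):
    # Fine-tune the received global model on the client's own data;
    # the linear least-squares objective is an illustrative choice.
    w = global_params.copy()
    for _ in range(steps):
        grad = 2 * X_local.T @ (X_local @ w - y_local) / len(X_local)
        w -= lr * grad
    return w
```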
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
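A brute-force version of this notion is the leave-one-out shift of the aggregated model when a single client is excluded, sketched below; the paper's estimator is designed to avoid exactly this kind of expensive recomputation.

```python
import numpy as np

def leave_one_out_influence(client_params, sample_counts):
    # Influence of client i ~ how far the sample-weighted global
    # average moves when client i is excluded from aggregation.
    stacked = np.stack(client_params)
    counts = np.asarray(sample_counts, dtype=float)
    full = (counts @ stacked) / counts.sum()
    idx = np.arange(len(stacked))
    return np.array([
        np.linalg.norm(full - (counts[idx != i] @ stacked[idx != i])
                       / counts[idx != i].sum())
        for i in range(len(stacked))
    ])
```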