DearFSAC: An Approach to Optimizing Unreliable Federated Learning via
Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2201.12701v1
- Date: Sun, 30 Jan 2022 01:47:43 GMT
- Title: DearFSAC: An Approach to Optimizing Unreliable Federated Learning via
Deep Reinforcement Learning
- Authors: Chenghao Huang, Weilong Chen, Yuxi Chen, Shunji Yang and Yanru Zhang
- Abstract summary: We propose the DEfect-AwaRe federated soft actor-critic (DearFSAC) to dynamically assign weights to local models to improve the robustness of federated learning.
DearFSAC outperforms three existing approaches on four datasets for both independent and identically distributed (IID) and non-IID settings.
- Score: 3.516494777812123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), model aggregation has been widely adopted for
data privacy. In recent years, assigning different weights to local models has
been used to alleviate the FL performance degradation caused by differences
between local datasets. However, when various defects make the FL process
unreliable, most existing FL approaches expose weak robustness. In this paper,
we propose the DEfect-AwaRe federated soft actor-critic (DearFSAC) to
dynamically assign weights to local models to improve the robustness of FL. The
deep reinforcement learning algorithm soft actor-critic is adopted for
near-optimal performance and stable convergence. Besides, an auto-encoder is
trained to output low-dimensional embedding vectors that are further utilized
to evaluate model quality. In the experiments, DearFSAC outperforms three
existing approaches on four datasets for both independent and identically
distributed (IID) and non-IID settings under defective scenarios.
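To make the aggregation idea concrete, here is a minimal sketch of quality-weighted model aggregation. It is not the paper's pipeline: the trained SAC policy is stubbed out by a heuristic score computed from embedding distances, and `embed` stands in for the trained auto-encoder with a fixed random projection; all function names are illustrative.

```python
# Minimal sketch of defect-aware, quality-weighted aggregation (illustrative,
# not DearFSAC's actual SAC-driven weighting).
import numpy as np

def embed(params: np.ndarray, k: int = 8) -> np.ndarray:
    """Stand-in for the trained auto-encoder: a fixed random projection
    to a k-dimensional embedding (illustrative only)."""
    rng = np.random.default_rng(0)          # fixed seed -> same projection each call
    proj = rng.standard_normal((k, params.size))
    return proj @ params

def quality_weights(local_models, global_model) -> np.ndarray:
    """Score local models by embedding distance to the global model;
    defective updates land far away and receive low weight."""
    g = embed(global_model)
    d = np.array([np.linalg.norm(embed(m) - g) for m in local_models])
    scores = np.exp(-d / (d.mean() + 1e-8))
    return scores / scores.sum()            # normalized aggregation weights

def aggregate(local_models, global_model) -> np.ndarray:
    """Quality-weighted aggregation in place of FedAvg's uniform 1/N weights."""
    w = quality_weights(local_models, global_model)
    return sum(wi * m for wi, m in zip(w, local_models))

# Demo: the defective fourth update is down-weighted automatically.
g = np.zeros(100)
updates = [g + 0.1 * np.random.default_rng(i).standard_normal(100) for i in range(3)]
updates.append(g + 10.0)                    # a defective local model
print(np.round(quality_weights(updates, g), 3))
```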
Related papers
- Unlocking the Potential of Model Calibration in Federated Learning [15.93119575317457]
We propose Non-Uniform Calibration for Federated Learning (NUCFL), a generic framework that integrates FL with model calibration.
NUCFL dynamically adjusts model calibration based on the relationship between each client's local model and the global model.
By doing so, NUCFL aligns the calibration needs of the global model without sacrificing accuracy.
arXiv Detail & Related papers (2024-09-07T20:11:11Z)
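A hedged sketch of the similarity-driven idea in the summary: the strength of a client's calibration penalty is scaled by how far its model has drifted from the global model. The cosine-on-parameters measure and the `calibration_coeff` helper are assumptions for illustration, not NUCFL's actual formulation.

```python
# Assumed scheme (not NUCFL's published rule): low local/global similarity
# triggers a stronger calibration penalty during local training.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def calibration_coeff(local_params, global_params, base=1.0):
    """The more a client's model diverges from the global model,
    the larger its calibration coefficient."""
    sim = cosine(local_params.ravel(), global_params.ravel())   # in [-1, 1]
    return base * (1.0 - max(sim, 0.0))

g = np.ones(10)
print(round(calibration_coeff(g.copy(), g), 6))   # aligned client -> 0.0
print(round(calibration_coeff(-g, g), 6))         # opposed client -> 1.0
```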
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
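The summary names the adversarial training framework; the sketch below shows a generic client-side AT step with a one-step FGSM attack. The epsilon budget, the single-step attack, and the [0, 1] input range are illustrative choices, not necessarily the paper's configuration.

```python
# Generic client-side adversarial training step (FGSM), for illustration.
import torch
import torch.nn.functional as F

def local_adversarial_step(model, x, y, optimizer, eps=8 / 255):
    """One local training step on FGSM adversarial examples."""
    # Craft adversarial inputs with a single FGSM step.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()  # assumes image inputs

    # Standard supervised update, but on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```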
- AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices [61.66943750584406]
We propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
arXiv Detail & Related papers (2023-12-18T05:18:17Z)
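The summary's staleness-aware update can be illustrated with a common heuristic: decay the mixing rate of an asynchronously arriving client model by its staleness. The 1/(1 + staleness) schedule below is an assumption, not AEDFL's published rule.

```python
# Staleness-decayed server-side mixing for asynchronous FL (illustrative).
import numpy as np

def async_server_update(global_params, client_params, staleness, base_lr=1.0):
    """Mix an asynchronously arriving client model into the global model,
    shrinking the mixing rate as the update grows stale."""
    alpha = base_lr / (1.0 + staleness)   # staleness = rounds since the client synced
    return (1.0 - alpha) * global_params + alpha * client_params

g = np.ones(4)
print(async_server_update(g, np.zeros(4), staleness=0))   # fresh: fully applied
print(async_server_update(g, np.zeros(4), staleness=9))   # stale: only 10% mixed in
```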
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class-prototype similarity distillation method that aligns the local and global models within a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
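FedCSD distills class-prototype similarities; as a simpler stand-in for the same alignment idea, the sketch below distills raw logits from the frozen global model into the local model during local training. The temperature T and weight mu are illustrative hyperparameters, not the paper's values.

```python
# Logit distillation toward the global model (stand-in for FedCSD's
# class-prototype similarity distillation).
import torch
import torch.nn.functional as F

def aligned_local_loss(local_model, global_model, x, y, T=2.0, mu=0.5):
    """Cross-entropy plus a distillation term pulling local logits
    toward the (frozen) global model's logits."""
    local_logits = local_model(x)
    with torch.no_grad():
        global_logits = global_model(x)          # global model acts as teacher
    ce = F.cross_entropy(local_logits, y)
    kd = F.kl_div(F.log_softmax(local_logits / T, dim=1),
                  F.softmax(global_logits / T, dim=1),
                  reduction="batchmean") * T * T  # T^2 restores gradient scale
    return ce + mu * kd                           # mu trades accuracy vs. alignment
```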
- Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z)
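One ingredient named in the summary, evidential uncertainty, can be sketched with a standard subjective-logic Dirichlet head; this is generic evidential deep learning, not RFedDis's exact architecture, and the feature-disentangling half is omitted.

```python
# Generic evidential (Dirichlet) output head, for illustration only.
import torch
import torch.nn.functional as F

def evidential_head(logits):
    """Map raw outputs to per-class beliefs plus a scalar uncertainty
    that grows when total evidence is low."""
    evidence = F.softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                        # Dirichlet concentration
    strength = alpha.sum(dim=1, keepdim=True)
    belief = evidence / strength                  # per-class belief mass
    uncertainty = logits.shape[1] / strength.squeeze(1)
    return belief, uncertainty

b, u = evidential_head(torch.full((2, 10), -20.0))  # ~zero evidence everywhere
print(u)                                            # ~1.0: maximally uncertain
```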
- FedCos: A Scene-adaptive Federated Optimization Enhancement for Performance Improvement [11.687451505965655]
We propose FedCos, which reduces the directional inconsistency of local models by introducing a cosine-similarity penalty.
We show that FedCos outperforms the well-known baselines and can enhance them under a variety of FL scenes.
With the help of FedCos, multiple FL methods require significantly fewer communication rounds than before to obtain a model with comparable performance.
arXiv Detail & Related papers (2022-04-07T02:59:54Z)
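A minimal sketch of a cosine-similarity penalty on the local update direction. The reference direction passed in as a flat tensor (e.g., the previous round's global movement) is an assumption for the demo, not necessarily how FedCos defines it.

```python
# Cosine penalty discouraging directional inconsistency of local updates
# (illustrative; the reference direction is assumed, not FedCos's definition).
import torch

def cosine_penalty(local_model, global_model, ref_direction, lam=0.1):
    """Add lam * (1 - cos) to the local loss so local updates that point
    away from the shared reference direction are penalized."""
    local_dir = torch.cat([(p - g.detach()).flatten()
                           for p, g in zip(local_model.parameters(),
                                           global_model.parameters())])
    cos = torch.nn.functional.cosine_similarity(local_dir, ref_direction, dim=0)
    return lam * (1.0 - cos)                      # identical directions -> zero penalty
```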
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- FedCAT: Towards Accurate Federated Learning via Device Concatenation [4.416919766772866]
Federated Learning (FL) enables all the involved devices to train a global model collaboratively without exposing their local data privacy.
For non-IID scenarios, the classification accuracy of FL models decreases drastically due to the weight divergence caused by data heterogeneity.
We introduce a novel FL approach named FedCAT that achieves high model accuracy through our proposed device selection strategy and device concatenation-based local training method.
arXiv Detail & Related papers (2022-02-23T10:08:43Z)
- Efficient Federated Learning for AIoT Applications Using Knowledge Distillation [2.5892786553124085]
Federated Learning (FL) trains a central model with decentralized data without compromising user privacy.
Traditional FL suffers from model inaccuracy since it trains local models using only the hard labels of the data.
This paper presents a novel Distillation-based Federated Learning architecture that enables efficient and accurate FL for AIoT applications.
arXiv Detail & Related papers (2021-11-29T06:40:42Z)
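The hard-vs-soft-label point can be illustrated with a standard distillation loss: the client trains against softened teacher probabilities rather than one-hot labels. Treating the global model as the teacher and the temperature value are assumptions; the paper's architecture may differ.

```python
# Standard soft-label distillation loss (illustrative stand-in for the
# paper's distillation-based FL architecture).
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, T=3.0):
    """Cross-entropy against softened teacher probabilities instead of
    one-hot hard labels; the T^2 factor restores the gradient scale."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean() * T * T
```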
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)