Byzantine-Robust Aggregation for Securing Decentralized Federated
Learning
- URL: http://arxiv.org/abs/2409.17754v1
- Date: Thu, 26 Sep 2024 11:36:08 GMT
- Title: Byzantine-Robust Aggregation for Securing Decentralized Federated
Learning
- Authors: Diego Cajaraville-Aboy, Ana Fernández-Vilas, Rebeca P.
Díaz-Redondo, and Manuel Fernández-Veiga
- Abstract summary: Federated Learning (FL) emerges as a distributed machine learning approach that addresses privacy concerns by training AI models locally on devices.
Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating the central server, thereby enhancing scalability and robustness through the avoidance of a single point of failure.
We present a novel Byzantine-robust aggregation algorithm to enhance the security of DFL environments, coined WFAgg.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) emerges as a distributed machine learning approach
that addresses privacy concerns by training AI models locally on devices.
Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating
the central server, thereby enhancing scalability and robustness through the
avoidance of a single point of failure. However, DFL faces significant
challenges in optimizing security, as most Byzantine-robust algorithms proposed
in the literature are designed for centralized scenarios. In this paper, we
present a novel Byzantine-robust aggregation algorithm to enhance the security
of Decentralized Federated Learning environments, coined WFAgg. This proposal
simultaneously handles the adverse conditions of dynamic decentralized
topologies and strengthens their robustness by employing multiple filters to
identify and mitigate Byzantine attacks. Experimental results demonstrate the effectiveness
of the proposed algorithm in maintaining model accuracy and convergence in the
presence of various Byzantine attack scenarios, outperforming state-of-the-art
centralized Byzantine-robust aggregation schemes (such as Multi-Krum or
Clustering). These algorithms are evaluated on an IID image classification
problem in both centralized and decentralized scenarios.
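The abstract benchmarks WFAgg against centralized Byzantine-robust rules such as Multi-Krum. As background, here is a minimal pure-Python sketch of the standard Multi-Krum rule (Blanchard et al.); WFAgg itself is not detailed in this abstract, and the function name and list-based vector representation are illustrative choices, not the paper's code.

```python
def multi_krum(updates, f, m):
    """Multi-Krum sketch: average the m updates with the smallest Krum
    scores, tolerating up to f Byzantine clients.

    updates: list of equal-length parameter vectors (lists of floats)
    """
    n = len(updates)

    def sq_dist(a, b):
        # squared Euclidean distance between two parameter vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, u in enumerate(updates):
        dists = sorted(sq_dist(u, v) for j, v in enumerate(updates) if j != i)
        # Krum score: sum of distances to the n - f - 2 closest other updates
        scores.append((sum(dists[: n - f - 2]), i))

    # keep the m lowest-scoring (most central) updates and average them
    chosen = [updates[i] for _, i in sorted(scores)[:m]]
    dim = len(updates[0])
    return [sum(u[k] for u in chosen) / m for k in range(dim)]
```

With one outlier update far from the honest cluster, the outlier receives a large score and is excluded from the average.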
Related papers
- Impact of Network Topology on Byzantine Resilience in Decentralized Federated Learning
This work investigates the effects of state-of-the-art Byzantine-robust aggregation methods in complex, large-scale network structures.
We find that state-of-the-art Byzantine-robust aggregation strategies are not resilient within large, non-fully-connected networks.
arXiv Detail & Related papers (2024-07-06T17:47:44Z)
- The Impact of Adversarial Node Placement in Decentralized Federated Learning Networks
As Federated Learning (FL) grows in popularity, new decentralized frameworks are becoming widespread.
This paper analyzes the performance of decentralized FL for various adversarial placement strategies when adversaries can jointly coordinate their placement within a network.
We propose a novel attack algorithm that prioritizes adversarial spread over adversarial centrality by maximizing the average network distance between adversaries.
arXiv Detail & Related papers (2023-11-14T06:48:50Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different factors on the generalization of D-SGDA.
We also balance the optimization error against the generalization error to obtain the optimal trade-off in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
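Experimental comparisons of robust aggregation schemes typically include simple statistical baselines. One widely used baseline is the coordinate-wise median, sketched here as a generic illustration (not code from any of the listed papers):

```python
from statistics import median

def coordinatewise_median(updates):
    """Coordinate-wise median: a classic robust aggregation rule that
    bounds the influence of any single Byzantine update per coordinate.

    updates: list of equal-length parameter vectors (lists of floats)
    """
    dim = len(updates[0])
    # take the median independently along each model coordinate
    return [median(u[k] for u in updates) for k in range(dim)]
```

Unlike plain averaging, an arbitrarily large malicious update shifts each aggregated coordinate by at most one rank in the sorted order.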
arXiv Detail & Related papers (2023-02-14T16:36:38Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since sensitive data are involved, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification
State-of-the-art object Re-ID approaches adopt clustering algorithms to generate pseudo-labels for the unlabeled target domain.
We propose an uncertainty-aware clustering framework (UCF) for UDA tasks.
Our UCF method consistently achieves state-of-the-art performance in multiple UDA tasks for object Re-ID.
arXiv Detail & Related papers (2021-08-22T09:57:14Z)
- Federated Learning for Intrusion Detection in IoT Security: A Hybrid Ensemble Approach
We first present an architecture for IDS based on a hybrid ensemble model, named PHEC, which gives improved performance compared to state-of-the-art architectures.
Next, we propose Noise-Tolerant PHEC in centralized and federated settings to address the label-noise problem.
Experimental results on four benchmark datasets drawn from various security attacks show that our model achieves a high true positive rate (TPR) while keeping the false positive rate (FPR) low on both noisy and clean data.
arXiv Detail & Related papers (2021-06-25T06:33:35Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Decentralized MCTS via Learned Teammate Models
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
- Byzantine-resilient Decentralized Stochastic Gradient Descent
We present an in-depth study towards the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.