An Experimental Study of Byzantine-Robust Aggregation Schemes in
Federated Learning
- URL: http://arxiv.org/abs/2302.07173v1
- Date: Tue, 14 Feb 2023 16:36:38 GMT
- Authors: Shenghui Li, Edith C.-H. Ngai, Thiemo Voigt
- Abstract summary: Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
- Score: 4.627944480085717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Byzantine-robust federated learning aims at mitigating Byzantine failures
during the federated training process, where malicious participants may upload
arbitrary local updates to the central server to degrade the performance of the
global model. In recent years, several robust aggregation schemes have been
proposed to defend against malicious updates from Byzantine clients and improve
the robustness of federated learning. These solutions are claimed to be
Byzantine-robust under certain assumptions. Meanwhile, new attack strategies
keep emerging that strive to circumvent the defense schemes. However,
there is a lack of systematic comparison and empirical study thereof. In this
paper, we conduct an experimental study of Byzantine-robust aggregation schemes
under different attacks using two popular algorithms in federated learning,
FedSGD and FedAvg. We first survey existing Byzantine attack strategies and
Byzantine-robust aggregation schemes that aim to defend against Byzantine
attacks. We also propose a new scheme, ClippedClustering, to enhance the
robustness of a clustering-based scheme by automatically clipping the updates.
Then we provide an experimental evaluation of eight aggregation schemes in the
scenario of five different Byzantine attacks. Our results show that these
aggregation schemes sustain relatively high accuracy in some cases but are
ineffective in others. In particular, our proposed ClippedClustering
successfully defends against most attacks when the local datasets are IID.
However, when the local datasets are Non-IID, the performance of all
the aggregation schemes significantly decreases. With Non-IID data, some of
these aggregation schemes fail even in the complete absence of Byzantine
clients. We conclude that the robustness of all the aggregation schemes is
limited, highlighting the need for new defense strategies, in particular for
Non-IID datasets.
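The abstract describes ClippedClustering only at a high level: a clustering-based scheme hardened by automatically clipping client updates. The following is a minimal sketch of that clip-then-cluster idea; the median-norm clipping threshold and the median-distance cluster split are illustrative assumptions, not the paper's actual rule.

```python
import math
import statistics

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def clipped_clustering(updates):
    """Hypothetical sketch of a clip-then-cluster aggregator.

    updates: list of equal-length lists of floats (client updates).
    """
    # 1) Automatic clipping: bound every update's L2 norm by the
    #    median client norm (one plausible "automatic" threshold).
    norms = [_norm(u) for u in updates]
    tau = statistics.median(norms)
    clipped = [
        [x * min(1.0, tau / max(n, 1e-12)) for x in u]
        for u, n in zip(updates, norms)
    ]
    # 2) Crude clustering: measure each clipped update's distance to
    #    the coordinate-wise median and keep the closer half, i.e.
    #    the larger, presumably benign cluster.
    center = [statistics.median(col) for col in zip(*clipped)]
    dists = [_norm([a - b for a, b in zip(u, center)]) for u in clipped]
    cut = statistics.median(dists)
    kept = [u for u, d in zip(clipped, dists) if d <= cut]
    # 3) Average the kept cluster to form the global update.
    return [sum(col) / len(kept) for col in zip(*kept)]
```

With three benign updates near (1, 1) and one large malicious update, the clipping step first shrinks the outlier and the cluster step then discards it, so the aggregate stays near the benign mean.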
Related papers
- Byzantine-Robust Federated Learning: An Overview With Focus on Developing Sybil-based Attacks to Backdoor Augmented Secure Aggregation Protocols [0.0]
Federated Learning (FL) paradigms enable large numbers of clients to collaboratively train Machine Learning models on private data.
Traditional FL schemes are left vulnerable to Byzantine attacks that attempt to hurt model performance by injecting malicious backdoors.
This paper provides an exhaustive and updated taxonomy of existing methods and frameworks, before zooming in for an in-depth analysis of the strengths and weaknesses of the Robustness of Federated Learning (RoFL) protocol.
We propose two novel Sybil-based attacks that take advantage of vulnerabilities in RoFL.
arXiv Detail & Related papers (2024-10-30T04:20:22Z) - Byzantine-Robust Aggregation for Securing Decentralized Federated
Learning [0.32985979395737774]
Federated Learning (FL) emerges as a distributed machine learning approach that addresses privacy concerns by training AI models locally on devices.
Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating the central server, thereby enhancing scalability and robustness through the avoidance of a single point of failure.
We present a novel Byzantine-robust aggregation algorithm, named WFAgg, to enhance the security of DFL environments.
arXiv Detail & Related papers (2024-09-26T11:36:08Z) - MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - Towards Attack-tolerant Federated Learning via Critical Parameter
Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
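The FedCPA observation above can be made concrete with a small similarity score. This is a hypothetical illustration, not FedCPA's actual scoring rule: it measures how much two updates agree on which parameters are most and least important by magnitude.

```python
def critical_overlap(u, v, k):
    """Illustrative similarity in the spirit of FedCPA's observation:
    benign updates tend to share their top-k and bottom-k
    critical-parameter index sets, while poisoned updates do not.
    Returns the average Jaccard overlap of those sets (0.0 to 1.0)."""
    def ranked(w):
        # Parameter indices ordered by increasing magnitude.
        return sorted(range(len(w)), key=lambda i: abs(w[i]))
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    ru, rv = ranked(u), ranked(v)
    top = jaccard(set(ru[-k:]), set(rv[-k:]))        # most critical
    bottom = jaccard(set(ru[:k]), set(rv[:k]))       # least critical
    return (top + bottom) / 2
```

Two benign updates that emphasize the same coordinates score near 1.0, while a poisoned update that inflates different coordinates scores near 0.0, which is the signal an aggregator could threshold on.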
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - Byzantines can also Learn from History: Fall of Centered Clipping in
Federated Learning [6.974212769362569]
In this work, we introduce a novel attack strategy that can circumvent the defences of the CC framework.
We also propose a new robust and fast defence mechanism that is effective against the proposed and other existing Byzantine attacks.
arXiv Detail & Related papers (2022-08-21T14:39:30Z) - MixTailor: Mixed Gradient Aggregation for Robust Learning Against
Tailored Attacks [32.8090455006524]
We introduce MixTailor, a scheme based on randomization of the aggregation strategies that makes it impossible for the attacker to be fully informed.
Our empirical studies across various datasets, attacks, and settings, validate our hypothesis and show that MixTailor successfully defends when well-known Byzantine-tolerant schemes fail.
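MixTailor's core idea, randomizing over aggregation rules so a tailored attacker cannot know which rule to evade, can be sketched as follows; the candidate pool and sampling scheme here are illustrative stand-ins, not the paper's actual construction.

```python
import random
import statistics

# A small pool of candidate aggregation rules; each maps the list of
# per-coordinate columns to an aggregated vector.
AGGREGATORS = {
    "mean": lambda cols: [sum(c) / len(c) for c in cols],
    "coordinate_median": lambda cols: [statistics.median(c) for c in cols],
}

def mixtailor_round(updates, rng=random):
    """Pick an aggregation rule uniformly at random each round, so an
    attacker tailoring its updates to one fixed rule is not guaranteed
    to face that rule."""
    cols = list(zip(*updates))
    name = rng.choice(sorted(AGGREGATORS))
    return name, AGGREGATORS[name](cols)
```

The defense's strength comes from the attacker's uncertainty: an update crafted to slip past the mean may be rejected by the median, and vice versa.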
arXiv Detail & Related papers (2022-07-16T13:30:37Z) - Robust Federated Learning via Over-The-Air Computation [48.47690125123958]
Simple averaging of model updates via over-the-air computation makes the learning task vulnerable to random or intended modifications of the local model updates of some malicious clients.
We propose a transmission and aggregation framework that is robust to such attacks while preserving the benefits of over-the-air computation for federated learning.
arXiv Detail & Related papers (2021-11-01T19:21:21Z) - Hybrid Dynamic Contrast and Probability Distillation for Unsupervised
Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical application in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates the unsupervised Re-Id problem into a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
arXiv Detail & Related papers (2021-09-29T02:56:45Z) - Byzantine-Robust Federated Learning via Credibility Assessment on
Non-IID Data [1.4146420810689422]
Federated learning is a novel framework that enables resource-constrained edge devices to jointly learn a model.
Standard federated learning is vulnerable to Byzantine attacks.
We propose a Byzantine-robust framework for federated learning via credibility assessment on non-iid data.
arXiv Detail & Related papers (2021-09-06T12:18:02Z) - Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study towards the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.