Byzantines can also Learn from History: Fall of Centered Clipping in
Federated Learning
- URL: http://arxiv.org/abs/2208.09894v3
- Date: Mon, 1 Jan 2024 17:55:25 GMT
- Title: Byzantines can also Learn from History: Fall of Centered Clipping in
Federated Learning
- Authors: Kerem Ozfatura and Emre Ozfatura and Alptekin Kupcu and Deniz Gunduz
- Abstract summary: In this work, we introduce a novel attack strategy that can circumvent the defences of the CC framework.
We also propose a new robust and fast defence mechanism that is effective against the proposed and other existing Byzantine attacks.
- Score: 6.974212769362569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing popularity of the federated learning (FL) framework due to its
success in a wide range of collaborative learning tasks also induces certain
security concerns. Among many vulnerabilities, the risk of Byzantine attacks is
of particular concern, which refers to the possibility of malicious clients
participating in the learning process. Hence, a crucial objective in FL is to
neutralize the potential impact of Byzantine attacks and to ensure that the
final model is trustable. It has been observed that the higher the variance
among the clients' models/updates, the more space there is for Byzantine
attacks to be hidden. As a consequence, by utilizing momentum, and thus reducing the variance, it is possible to weaken the strength of known Byzantine
attacks. The centered clipping (CC) framework has further shown that the
momentum term from the previous iteration, besides reducing the variance, can
be used as a reference point to neutralize Byzantine attacks better. In this
work, we first expose vulnerabilities of the CC framework, and introduce a
novel attack strategy that can circumvent the defences of CC and other robust
aggregators, reducing their test accuracy by up to 33% in the best case in
image classification tasks. Then, we propose a new robust and fast defence
mechanism that is effective against the proposed and other existing Byzantine
attacks.
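For context, the sketch below illustrates the centered clipping rule described above, in which each client update is clipped around the previous round's aggregate momentum before averaging. This is a minimal sketch under stated assumptions: the function name centered_clip, the clipping radius tau, the single refinement pass, and the toy data are all illustrative, not the paper's reference implementation.

```python
import numpy as np

def centered_clip(updates, v_prev, tau=10.0, iters=1):
    """Centered clipping (CC) aggregation sketch.

    updates: client momentum vectors for the current round
    v_prev:  aggregated momentum from the previous round, used as the
             reference point around which client updates are clipped
    tau:     clipping radius (illustrative value, an assumption here)
    iters:   number of clipping refinement passes
    """
    v = v_prev.copy()
    for _ in range(iters):
        clipped = []
        for u in updates:
            delta = u - v                  # deviation from the reference point
            norm = np.linalg.norm(delta)
            scale = min(1.0, tau / norm) if norm > 0 else 1.0
            clipped.append(delta * scale)  # scale deviations down to radius tau
        v = v + np.mean(clipped, axis=0)   # move the reference toward the clipped mean
    return v

# Toy usage: nine honest clients plus one Byzantine client sending a huge update.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=5) for _ in range(9)]
byzantine = [np.full(5, 100.0)]
v = centered_clip(honest + byzantine, v_prev=np.zeros(5), tau=2.0)
print(v)  # the outlier's influence is bounded by tau, not by its magnitude
```

The sketch also makes the defence's trust assumption visible: the previous momentum serves as an honest reference point, which is the kind of assumption the vulnerabilities exposed in this paper target.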
Related papers
- Byzantine-Robust Federated Learning: An Overview With Focus on Developing Sybil-based Attacks to Backdoor Augmented Secure Aggregation Protocols [0.0]
Federated Learning (FL) paradigms enable large numbers of clients to collaboratively train Machine Learning models on private data.
Traditional FL schemes are vulnerable to Byzantine attacks that attempt to hurt model performance by injecting malicious backdoors.
This paper provides an exhaustive and updated taxonomy of existing methods and frameworks, before conducting an in-depth analysis of the strengths and weaknesses of the Robustness of Federated Learning (RoFL) protocol.
We propose two novel Sybil-based attacks that take advantage of vulnerabilities in RoFL.
arXiv Detail & Related papers (2024-10-30T04:20:22Z) - Understanding Byzantine Robustness in Federated Learning with A Black-box Server [47.98788470469994]
Federated learning (FL) is vulnerable to Byzantine attacks, in which some participants attempt to damage the utility or hinder the convergence of the learned model.
Previous works propose robust rules to aggregate updates from participants against different types of Byzantine attacks (two standard such rules are sketched after this list).
In practice, FL systems can involve a black-box server that keeps the adopted aggregation rule inaccessible to participants, which can naturally defend against or weaken some Byzantine attacks.
arXiv Detail & Related papers (2024-08-12T10:18:24Z) - Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness [5.735144760031169]
Byzantine clients intentionally disrupt the learning process by broadcasting arbitrary model updates to other clients.
In this paper, we introduce SecureDL, a novel decentralized learning (DL) protocol designed to enhance the security and privacy of DL against Byzantine threats.
Our experiments show that SecureDL is effective even in the case of attacks by a malicious majority.
arXiv Detail & Related papers (2024-04-27T18:17:36Z) - Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - Byzantine-Robust Federated Learning with Variance Reduction and
Differential Privacy [6.343100139647636]
Federated learning (FL) is designed to preserve data privacy during model training.
FL is vulnerable to privacy attacks and Byzantine attacks.
We propose a new FL scheme that guarantees rigorous privacy and simultaneously enhances system robustness against Byzantine attacks.
arXiv Detail & Related papers (2023-09-07T01:39:02Z) - An Experimental Study of Byzantine-Robust Aggregation Schemes in
Federated Learning [4.627944480085717]
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
arXiv Detail & Related papers (2023-02-14T16:36:38Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study towards the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z) - Federated Variance-Reduced Stochastic Gradient Descent with Robustness
to Byzantine Attacks [74.36161581953658]
This paper deals with distributed finite-sum optimization for learning over networks in the presence of malicious Byzantine attacks.
To cope with such attacks, most resilient approaches so far combine stochastic gradient descent (SGD) with different robust aggregation rules.
The present work puts forth a Byzantine attack resilient distributed (Byrd-) SAGA approach for learning tasks involving finite-sum optimization over networks.
arXiv Detail & Related papers (2019-12-29T19:46:03Z)
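Several entries above appeal to generic robust aggregation rules without spelling them out (see the forward reference in the black-box-server entry). As a hedged illustration only, and not the method of any specific paper listed here, the sketch below shows two standard rules, coordinate-wise median and coordinate-wise trimmed mean; the function names and the trim fraction beta are assumptions made for the example.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of arbitrary updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, beta=0.1):
    """Coordinate-wise trimmed mean: per coordinate, drop the beta fraction of
    smallest and largest values, then average what remains."""
    X = np.sort(np.stack(updates), axis=0)  # sort client values per coordinate
    k = int(beta * X.shape[0])              # how many to trim from each end
    return X[k:X.shape[0] - k].mean(axis=0)

# Toy usage: eight honest updates plus two Byzantine outliers.
rng = np.random.default_rng(1)
updates = [rng.normal(0.0, 0.1, size=4) for _ in range(8)]
updates += [np.full(4, 50.0), np.full(4, -50.0)]
print(coordinate_median(updates))  # near zero despite the outliers
print(trimmed_mean(updates, 0.2))  # likewise
```

Both rules bound the influence of a minority of arbitrary updates per coordinate, which is the property the attacks surveyed above try to evade, for instance by hiding within the honest clients' variance.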