Linear Scalarization for Byzantine-robust learning on non-IID data
- URL: http://arxiv.org/abs/2210.08287v1
- Date: Sat, 15 Oct 2022 13:24:00 GMT
- Title: Linear Scalarization for Byzantine-robust learning on non-IID data
- Authors: Latifa Errami, El Houcine Bergou
- Abstract summary: We study the problem of Byzantine-robust learning when data among clients is heterogeneous.
We propose the use of Linear Scalarization (LS) as an enhancing method to enable current defenses to circumvent Byzantine attacks in the non-IID setting.
- Score: 3.098066034546503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we study the problem of Byzantine-robust learning when data
among clients is heterogeneous. We focus on poisoning attacks targeting the
convergence of SGD. Although this problem has received great attention, the
main Byzantine defenses rely on the IID assumption, which causes them to fail
when the data distribution is non-IID, even in the absence of an attack. We
propose the use of Linear
Scalarization (LS) as an enhancing method to enable current defenses to
circumvent Byzantine attacks in the non-IID setting. The LS method is based on
the incorporation of a trade-off vector that penalizes suspected malicious
clients. Empirical analysis corroborates that the proposed LS variants are
viable in the IID setting. For mild to strong non-IID data splits, LS is
either comparable to or outperforms current approaches under state-of-the-art
Byzantine attack scenarios.
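The abstract describes LS as weighting client updates with a trade-off vector that penalizes suspected malicious clients. The following is a minimal sketch of that idea, not the paper's exact method: it assumes gradients arrive as flat vectors and uses distance to the coordinate-wise median as a hypothetical suspicion score.

```python
import numpy as np

def ls_aggregate(gradients, temperature=1.0):
    """Linear Scalarization sketch: combine client gradients with a
    trade-off vector that down-weights suspected malicious clients.
    The suspicion score (distance to the coordinate-wise median) and
    the softmax weighting are illustrative assumptions."""
    grads = np.asarray(gradients, dtype=float)
    center = np.median(grads, axis=0)               # robust reference point
    dists = np.linalg.norm(grads - center, axis=1)  # suspicion scores
    logits = -dists / temperature                   # outliers get small weights
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                        # trade-off vector
    return weights @ grads                          # scalarized aggregate

# Toy usage: four benign clients plus one attacker-scaled gradient.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(4, 3)) + 1.0
byzantine = -50.0 * np.ones((1, 3))
print(ls_aggregate(np.vstack([benign, byzantine])))  # close to the benign mean
```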
Related papers
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
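A rough illustration of the critical-parameter observation in the FedCPA entry above: compare two local updates by how much their top-k and bottom-k coordinate sets overlap. The Jaccard similarity and the choice of k are assumptions, not necessarily the paper's exact measure.

```python
import numpy as np

def critical_sets(update, k):
    """Indices of the k largest and k smallest entries of a flat update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def overlap_score(u, v, k=10):
    """Mean Jaccard overlap of top-k and bottom-k critical parameters.
    Per the FedCPA observation, benign updates tend to share critical
    coordinates while poisoned ones do not; the Jaccard metric here is
    an illustrative choice."""
    top_u, bot_u = critical_sets(u, k)
    top_v, bot_v = critical_sets(v, k)
    jaccard = lambda a, b: len(a & b) / len(a | b)
    return 0.5 * (jaccard(top_u, top_v) + jaccard(bot_u, bot_v))

u = np.array([5.0, -4.0, 0.1, 3.0, -2.0, 0.0])
v = np.array([4.5, -3.5, 0.0, 2.5, -1.0, 0.2])
print(overlap_score(u, v, k=2))  # 1.0: these two updates look mutually benign
```

A FedCPA-style aggregator would then down-weight clients whose pairwise overlap with the majority is low (the exact rule is the paper's, not shown here).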
- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning [4.627944480085717]
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
arXiv Detail & Related papers (2023-02-14T16:36:38Z)
- Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting [58.91947205027892]
Federated learning has exhibited vulnerabilities to Byzantine attacks.
Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model.
A wealth of robust AGgregation Rules (AGRs) has been proposed to defend against Byzantine attacks.
arXiv Detail & Related papers (2023-02-13T03:31:50Z)
- Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data [1.4146420810689422]
Federated learning is a novel framework that enables resource-constrained edge devices to jointly learn a model.
Standard federated learning is vulnerable to Byzantine attacks.
We propose a Byzantine-robust framework for federated learning via credibility assessment on non-IID data.
arXiv Detail & Related papers (2021-09-06T12:18:02Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently, given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
- Defending Distributed Classifiers Against Data Poisoning Attacks [26.89258745198076]
Support Vector Machines (SVMs) are vulnerable to targeted training data manipulations.
We develop a novel defense algorithm that improves resistance against such attacks.
arXiv Detail & Related papers (2020-08-21T03:11:23Z)
- Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing [55.012801269326594]
In Byzantine robust distributed learning, a central server wants to train a machine learning model over data distributed across multiple workers.
A fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages.
We propose a simple bucketing scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost.
arXiv Detail & Related papers (2020-06-16T17:58:53Z)
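The bucketing entry above is concrete enough for a short sketch: randomly partition the incoming updates into small buckets, average within each bucket so the robust aggregator sees less heterogeneous inputs, then apply any existing robust rule to the bucket means. The bucket size and the coordinate-wise-median fallback below are illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np

def bucket_then_aggregate(updates, bucket_size=2, robust_agg=None, seed=0):
    """Bucketing sketch: average random buckets of client updates, then
    apply an existing robust aggregation rule to the bucket means."""
    if robust_agg is None:
        robust_agg = lambda g: np.median(g, axis=0)  # stand-in robust rule
    updates = np.asarray(updates, dtype=float)
    idx = np.random.default_rng(seed).permutation(len(updates))
    buckets = [idx[i:i + bucket_size] for i in range(0, len(idx), bucket_size)]
    bucket_means = np.stack([updates[b].mean(axis=0) for b in buckets])
    return robust_agg(bucket_means)

updates = np.vstack([np.ones((7, 3)), 100.0 * np.ones((1, 3))])  # one Byzantine
print(bucket_then_aggregate(updates, bucket_size=2))             # ~[1. 1. 1.]
```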
- Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks [74.36161581953658]
This paper deals with distributed finite-sum optimization for learning over networks in the presence of malicious Byzantine attacks.
To cope with such attacks, most resilient approaches so far combine stochastic gradient descent (SGD) with different robust aggregation rules.
The present work puts forth a Byzantine attack resilient distributed (Byrd-) SAGA approach for learning tasks involving finite-sum optimization over networks.
arXiv Detail & Related papers (2019-12-29T19:46:03Z)
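As a rough illustration of the Byrd-SAGA recipe (variance-reduced local gradients plus a robust server-side rule), the sketch below pairs a per-worker SAGA gradient table with a median aggregator. The class and function names are hypothetical, and the coordinate-wise median stands in for a heavier robust rule such as the geometric median.

```python
import numpy as np

class SagaWorker:
    """Per-worker SAGA state: the last gradient seen for each local
    sample, enabling variance-reduced gradient estimates."""

    def __init__(self, n_samples, dim):
        self.table = np.zeros((n_samples, dim))

    def corrected_gradient(self, i, grad_i):
        """SAGA estimate: fresh gradient of sample i, minus its stored
        gradient, plus the table average; then refresh the table entry."""
        estimate = grad_i - self.table[i] + self.table.mean(axis=0)
        self.table[i] = grad_i
        return estimate

def robust_server_step(messages):
    """Aggregate workers' corrected gradients with a robust rule; the
    coordinate-wise median here is a simplified stand-in."""
    return np.median(np.asarray(messages), axis=0)

# Toy usage: two honest workers plus one arbitrary (Byzantine) message.
w1, w2 = SagaWorker(3, 2), SagaWorker(3, 2)
honest1 = w1.corrected_gradient(0, np.array([0.5, -0.5]))
honest2 = w2.corrected_gradient(1, np.array([0.4, -0.6]))
print(robust_server_step([honest1, honest2, np.array([9e9, -9e9])]))
# ~[0.5 -0.6]: the outlier is ignored
```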
This list is automatically generated from the titles and abstracts of the papers on this site.