Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting
- URL: http://arxiv.org/abs/2302.06079v2
- Date: Sun, 4 Jun 2023 12:58:35 GMT
- Title: Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting
- Authors: Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen
- Abstract summary: Federated learning has exhibited vulnerabilities to Byzantine attacks.
Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model.
A wealth of robust AGgregation Rules (AGRs) have been proposed to defend against Byzantine attacks.
- Score: 58.91947205027892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has exhibited vulnerabilities to Byzantine attacks, where
the Byzantine attackers can send arbitrary gradients to a central server to
destroy the convergence and performance of the global model. A wealth of robust
AGgregation Rules (AGRs) have been proposed to defend against Byzantine
attacks. However, Byzantine clients can still circumvent robust AGRs when data
is non-Identically and Independently Distributed (non-IID). In this paper, we
first reveal the root causes of performance degradation of current robust AGRs
in non-IID settings: the curse of dimensionality and gradient heterogeneity. In
order to address this issue, we propose GAS, a GrAdient Splitting approach that can
successfully adapt existing robust AGRs to non-IID settings. We also provide a
detailed convergence analysis when the existing robust AGRs are combined with
GAS. Experiments on various real-world datasets verify the efficacy of our
proposed GAS. The implementation code is provided in
https://github.com/YuchenLiu-a/byzantine-gas.
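The abstract describes GAS only at a high level: split gradients, then adapt an existing robust AGR to the splits. The sketch below is a minimal illustrative reading of that idea under stated assumptions, not the authors' implementation (the official code is at the GitHub link above): each client gradient is split into low-dimensional chunks, a simple base AGR is applied per chunk, clients are scored by their deviation from the chunk-wise aggregates, and the lowest-scoring clients are averaged. The function names, the coordinate-wise median used as the base AGR, and the selection step are assumptions made for this example.
```python
# Minimal sketch of the gradient-splitting idea described in the abstract above.
# Illustrative only, NOT the authors' reference implementation
# (see https://github.com/YuchenLiu-a/byzantine-gas for the official code).
import numpy as np

def coordinate_median(grads: np.ndarray) -> np.ndarray:
    """A simple base robust AGR stand-in: coordinate-wise median over clients."""
    return np.median(grads, axis=0)

def gas_style_aggregate(grads: np.ndarray, num_splits: int, num_byzantine: int) -> np.ndarray:
    """Split each client gradient into low-dimensional chunks, robustly aggregate
    each chunk, score clients by their deviation from the chunk-wise aggregates,
    and average the gradients of the lowest-scoring (most trustworthy) clients."""
    n, _ = grads.shape
    chunks = np.array_split(grads, num_splits, axis=1)   # split along the parameter dimension
    scores = np.zeros(n)
    for chunk in chunks:                                 # each chunk has shape (n, d_k)
        agg = coordinate_median(chunk)                   # base robust AGR on the low-dim chunk
        scores += np.linalg.norm(chunk - agg, axis=1)    # accumulate per-client deviation
    keep = np.argsort(scores)[: n - num_byzantine]       # drop the most deviating clients
    return grads[keep].mean(axis=0)

# Toy usage: 8 honest clients plus 2 Byzantine clients sending large noise.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 100))
byzantine = rng.normal(loc=-10.0, scale=5.0, size=(2, 100))
agg = gas_style_aggregate(np.vstack([honest, byzantine]), num_splits=4, num_byzantine=2)
print(np.allclose(agg, honest.mean(axis=0)))             # True: the Byzantine updates are filtered out
```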
Related papers
- Self-Guided Robust Graph Structure Refinement [37.235898707554284]
We propose a self-guided graph structure refinement (GSR) framework to defend GNNs against adversarial attacks.
In this paper, we demonstrate the effectiveness of SG-GSR under various scenarios including non-targeted attacks, targeted attacks, feature attacks, e-commerce fraud, and noisy node labels.
arXiv Detail & Related papers (2024-02-19T05:00:07Z) - AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks
Through Local Update Amplification [13.989900030876012]
Byzantine-robust aggregation rules (AGRs) are proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants.
This paper introduces a novel approach called AGRAMPLIFIER, aiming to simultaneously improve the robustness, fidelity, and efficiency of the existing AGRs.
The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update.
arXiv Detail & Related papers (2023-11-13T00:34:45Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Linear Scalarization for Byzantine-robust learning on non-IID data [3.098066034546503]
We study the problem of Byzantine-robust learning when data among clients is heterogeneous.
We propose the use of Linear Scalarization (LS) as an enhancing method to enable current defenses to withstand Byzantine attacks in the non-IID setting.
arXiv Detail & Related papers (2022-10-15T13:24:00Z) - Understanding and Improving Graph Injection Attack by Promoting
Unnoticeability [69.3530705476563]
Graph Injection Attack (GIA) is a practical attack scenario on Graph Neural Networks (GNNs)
We compare GIA with Graph Modification Attack (GMA) and find that GIA can be provably more harmful than GMA due to its relatively high flexibility.
We introduce a novel constraint -- homophily unnoticeability that enforces GIA to preserve the homophily, and propose Harmonious Adversarial Objective (HAO) to instantiate it.
arXiv Detail & Related papers (2022-02-16T13:41:39Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation magnitude in a unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Federated Variance-Reduced Stochastic Gradient Descent with Robustness
to Byzantine Attacks [74.36161581953658]
This paper deals with distributed finite-sum optimization for learning over networks in the presence of malicious Byzantine attacks.
To cope with such attacks, most resilient approaches so far combine stochastic gradient descent (SGD) with different robust aggregation rules (a minimal sketch of two such rules appears after this list).
The present work puts forth a Byzantine attack resilient distributed (Byrd-) SAGA approach for learning tasks involving finite-sum optimization over networks.
arXiv Detail & Related papers (2019-12-29T19:46:03Z)
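Several of the entries above (e.g., AGRAMPLIFIER and Byrd-SAGA) build on generic Byzantine-robust aggregation rules. As noted in the Byrd-SAGA summary, below is a minimal sketch of two common baseline AGRs, coordinate-wise trimmed mean and Krum; the names, signatures, and toy usage are illustrative and not taken from any of the listed papers.
```python
# Minimal sketches of two standard Byzantine-robust aggregation rules (AGRs):
# coordinate-wise trimmed mean and Krum. Names and parameters are illustrative.
import numpy as np

def trimmed_mean(grads: np.ndarray, trim: int) -> np.ndarray:
    """Per coordinate, drop the `trim` largest and `trim` smallest client values,
    then average the rest (`trim` is usually set to the assumed Byzantine count)."""
    sorted_grads = np.sort(grads, axis=0)            # sort each coordinate across clients
    return sorted_grads[trim: grads.shape[0] - trim].mean(axis=0)

def krum(grads: np.ndarray, num_byzantine: int) -> np.ndarray:
    """Return the single client gradient whose summed squared distance to its
    n - num_byzantine - 2 nearest peers is smallest (the Krum selection rule)."""
    n = grads.shape[0]
    sq_dists = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=2) ** 2
    scores = []
    for i in range(n):
        nearest = np.sort(np.delete(sq_dists[i], i))[: n - num_byzantine - 2]
        scores.append(nearest.sum())
    return grads[int(np.argmin(scores))]

# Toy usage: one Byzantine client sends an extreme update; both rules ignore it.
grads = np.vstack([np.ones((4, 5)), 1e6 * np.ones((1, 5))])
print(trimmed_mean(grads, trim=1))        # -> [1. 1. 1. 1. 1.]
print(krum(grads, num_byzantine=1))       # -> [1. 1. 1. 1. 1.]
```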
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.