AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks
Through Local Update Amplification
- URL: http://arxiv.org/abs/2311.06996v2
- Date: Thu, 23 Nov 2023 11:30:13 GMT
- Authors: Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang,
Guangdong Bai, and Yong Xiang
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The collaborative nature of federated learning (FL) poses a major threat in
the form of manipulation of local training data and local updates, known as the
Byzantine poisoning attack. To address this issue, many Byzantine-robust
aggregation rules (AGRs) have been proposed to filter out or moderate
suspicious local updates uploaded by Byzantine participants.
This paper introduces a novel approach called AGRAMPLIFIER, aiming to
simultaneously improve the robustness, fidelity, and efficiency of the existing
AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local
updates by identifying the most repressive features of each gradient update,
which provides a clearer distinction between malicious and benign updates,
consequently improving detection effectiveness. To achieve this objective, two
approaches, namely AGRMP and AGRXAI, are proposed. AGRMP organizes local
updates into patches and extracts the largest value from each patch, while
AGRXAI leverages explainable AI methods to extract the gradient of the most
activated features. By equipping AGRAMPLIFIER with the existing
Byzantine-robust mechanisms, we enhance the model's robustness while
maintaining its fidelity and improving overall efficiency.
AGRAMPLIFIER is universally compatible with the existing Byzantine-robust
mechanisms. The paper demonstrates its effectiveness by integrating it with all
mainstream AGR mechanisms. Extensive evaluations conducted on seven datasets
from diverse domains against seven representative poisoning attacks
consistently show enhancements in robustness, fidelity, and efficiency, with
average gains of 40.08%, 39.18%, and 10.68%, respectively.
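The AGRMP mechanism described in the abstract (organize each local update into patches, extract the largest value from each patch, then hand the amplified updates to an existing robust AGR) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the patch size, the use of absolute magnitude to pick each patch's representative entry, and the coordinate-wise median as the downstream AGR are all assumptions.

```python
import numpy as np

def agrmp_amplify(update, patch_size=4):
    """Amplify a flattened gradient update by max-pooling over patches.

    Hypothetical sketch of the AGRMP idea: split the update into
    fixed-size patches and keep, from each patch, the entry with the
    largest magnitude (sign preserved). Patch size is an assumption.
    """
    update = np.asarray(update, dtype=float)
    pad = (-len(update)) % patch_size           # zero-pad so length divides evenly
    padded = np.concatenate([update, np.zeros(pad)])
    patches = padded.reshape(-1, patch_size)
    idx = np.argmax(np.abs(patches), axis=1)    # most "activated" entry per patch
    return patches[np.arange(len(patches)), idx]

def median_aggregate(amplified_updates):
    """Coordinate-wise median, one standard Byzantine-robust AGR,
    applied here to the amplified updates from all participants."""
    return np.median(np.stack(amplified_updates), axis=0)
```

Because the amplified updates are much lower-dimensional than the raw gradients, any AGR applied to them also runs faster, which is consistent with the efficiency gains the abstract reports.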
Related papers
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly with respect to the surrogate model.
We propose a bilevel optimization paradigm that explicitly reformulates the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- DiveR-CT: Diversity-enhanced Red Teaming with Relaxing Constraints
We introduce DiveR-CT, which relaxes conventional constraints on the objective and semantic reward, granting greater freedom for the policy to enhance diversity.
Our experiments demonstrate DiveR-CT's marked superiority over baselines by 1) generating data that perform better in various diversity metrics across different attack success rate levels, 2) better enhancing resiliency in blue team models through safety tuning based on collected data, 3) allowing dynamic control of objective weights for reliable and controllable attack success rates, and 4) reducing susceptibility to reward overoptimization.
arXiv Detail & Related papers (2024-05-29T12:12:09Z)
- ADVREPAIR: Provable Repair of Adversarial Attack
Deep neural networks (DNNs) are increasingly deployed in safety-critical domains, but their vulnerability to adversarial attacks poses serious safety risks.
Existing neuron-level methods that use limited data are ineffective at repairing adversarial vulnerabilities due to the complexity of adversarial attack mechanisms.
We propose ADVREPAIR, a novel approach for provable repair of adversarial attacks using limited data.
arXiv Detail & Related papers (2024-04-02T05:16:59Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition
Face recognition systems powered by deep learning are vulnerable to adversarial attacks.
We propose RADAP, a robust and adaptive defense mechanism against diverse adversarial patches.
We conduct comprehensive experiments to validate the effectiveness of RADAP.
arXiv Detail & Related papers (2023-11-29T03:37:14Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- GaitGCI: Generative Counterfactual Intervention for Gait Recognition
Gait is one of the most promising biometrics that aims to identify pedestrians from their walking patterns.
Prevailing methods are susceptible to confounders, resulting in networks that hardly focus on the regions reflecting effective walking patterns.
We propose a Generative Counterfactual Intervention framework, dubbed GaitGCI, consisting of Counterfactual Intervention Learning (CIL) and Diversity-Constrained Dynamic Convolution (DCDC).
arXiv Detail & Related papers (2023-06-06T05:59:23Z)
- DAP: A Dynamic Adversarial Patch for Evading Person Detectors
This paper introduces a novel approach that produces a Dynamic Adversarial Patch (DAP).
DAP maintains a naturalistic appearance while optimizing attack efficiency and robustness to real-world transformations.
Experimental results demonstrate that the proposed approach outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-05-19T11:52:42Z)
- Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting
Federated learning has exhibited vulnerabilities to Byzantine attacks.
Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model.
A wealth of robust AGgregation Rules (AGRs) have been proposed to defend against Byzantine attacks.
arXiv Detail & Related papers (2023-02-13T03:31:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.