AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks
Through Local Update Amplification
- URL: http://arxiv.org/abs/2311.06996v2
- Date: Thu, 23 Nov 2023 11:30:13 GMT
- Title: AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks
Through Local Update Amplification
- Authors: Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang,
Guangdong Bai, and Yong Xiang
- Abstract summary: Byzantine-robust aggregation rules (AGRs) are proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants.
This paper introduces a novel approach called AGRAMPLIFIER, aiming to simultaneously improve the robustness, fidelity, and efficiency of the existing AGRs.
The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update.
- Score: 13.989900030876012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The collaborative nature of federated learning (FL) poses a major threat in
the form of manipulation of local training data and local updates, known as the
Byzantine poisoning attack. To address this issue, many Byzantine-robust
aggregation rules (AGRs) have been proposed to filter out or moderate
suspicious local updates uploaded by Byzantine participants.
This paper introduces a novel approach called AGRAMPLIFIER, aiming to
simultaneously improve the robustness, fidelity, and efficiency of the existing
AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local
updates by identifying the most repressive features of each gradient update,
which provides a clearer distinction between malicious and benign updates,
consequently improving the detection effect. To achieve this objective, two
approaches, namely AGRMP and AGRXAI, are proposed. AGRMP organizes local
updates into patches and extracts the largest value from each patch, while
AGRXAI leverages explainable AI methods to extract the gradient of the most
activated features. By equipping the existing Byzantine-robust mechanisms with
AGRAMPLIFIER, we enhance the model's robustness while maintaining its fidelity
and improving overall efficiency.
AGRAMPLIFIER is universally compatible with the existing Byzantine-robust
mechanisms. The paper demonstrates its effectiveness by integrating it with all
mainstream AGR mechanisms. Extensive evaluations conducted on seven datasets
from diverse domains against seven representative poisoning attacks
consistently show enhancements in robustness, fidelity, and efficiency, with
average gains of 40.08%, 39.18%, and 10.68%, respectively.
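The AGRMP mechanism described above lends itself to a short illustration. The sketch below is a minimal, assumption-laden reading of the abstract: each flattened local update is split into fixed-size patches, the dominant value of each patch is kept as the "amplified" update, and a simple median-distance filter (a placeholder for any existing AGR) scores clients on the amplified updates before averaging the surviving raw ones. The patch size, the use of absolute magnitude, and the function names (`agrmp_amplify`, `amplified_filter_and_average`) are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of the AGRMP idea as summarized in the abstract (assumptions noted above).
import numpy as np

def agrmp_amplify(update: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Max-pool a flattened gradient update over non-overlapping patches."""
    flat = update.ravel()
    flat = np.pad(flat, (0, (-len(flat)) % patch_size))   # pad to a multiple of patch_size
    patches = flat.reshape(-1, patch_size)
    idx = np.abs(patches).argmax(axis=1)                   # dominant entry per patch (assumed: by magnitude)
    return patches[np.arange(len(patches)), idx]           # keep its signed value

def amplified_filter_and_average(raw_updates: np.ndarray, keep: int) -> np.ndarray:
    """Score clients on amplified updates, keep the `keep` closest to the median, average their raw updates."""
    amplified = np.stack([agrmp_amplify(u) for u in raw_updates])
    center = np.median(amplified, axis=0)                  # any existing AGR could sit here instead
    scores = np.linalg.norm(amplified - center, axis=1)
    selected = np.argsort(scores)[:keep]
    return raw_updates[selected].mean(axis=0)

# Usage: 10 clients, 1000 parameters each; keep the 6 most central clients.
raw = np.random.randn(10, 1000)
global_update = amplified_filter_and_average(raw, keep=6)
```

In this reading, AGRXAI would replace the patch-max step with the gradients of the most activated features obtained from an explainable-AI method, leaving the rest of the pipeline unchanged.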
Related papers
- RoMA: Robust Malware Attribution via Byte-level Adversarial Training with Global Perturbations and Adversarial Consistency Regularization [17.387354788421742]
APT adversaries often conceal their identities, rendering attribution inherently adversarial.
Existing machine learning-based attribution models, while effective, remain highly vulnerable to adversarial attacks.
We propose RoMA, a novel single-step adversarial training approach that integrates global perturbations to generate enhanced adversarial samples.
arXiv Detail & Related papers (2025-02-11T11:51:12Z)
- CASC-AI: Consensus-aware Self-corrective AI Agents for Noise Cell Segmentation [8.50335568530725]
Multi-class cell segmentation in high-resolution gigapixel whole slide images is crucial for various clinical applications.
Recent efforts have democratized this process by involving lay annotators without medical expertise.
We propose a consensus-aware self-corrective AI agent that leverages the Consensus Matrix to guide its learning process.
arXiv Detail & Related papers (2025-02-11T06:58:50Z)
- FedRISE: Rating Induced Sign Election of Gradients for Byzantine Tolerant Federated Aggregation [5.011091042850546]
We develop a robust aggregator called FedRISE for cross-silo FL that is consistent and less susceptible to poisoning updates by an omniscient attacker.
We compare our method against 8 robust aggregators under 6 poisoning attacks on 3 datasets and architectures.
Our results show that existing robust aggregators collapse for at least some attacks under severe settings, while FedRISE demonstrates better robustness because of a stringent gradient inclusion formulation.
arXiv Detail & Related papers (2024-11-06T12:14:11Z)
- Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing [12.131163373757383]
Transferable adversarial attacks pose significant threats to deep neural networks.
We propose a novel framework for gradient editing-based transferable attacks, named GE-AdvGAN+.
Our framework integrates nearly all mainstream attack methods to enhance transferability while significantly reducing computational resource consumption.
arXiv Detail & Related papers (2024-08-22T18:26:31Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize the single-level objective directly with respect to the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints [68.82294911302579]
We introduce DiveR-CT, which relaxes conventional constraints on the objective and semantic reward, granting greater freedom for the policy to enhance diversity.
Our experiments demonstrate DiveR-CT's marked superiority over baselines by 1) generating data that perform better in various diversity metrics across different attack success rate levels, 2) better-enhancing resiliency in blue team models through safety tuning based on collected data, 3) allowing dynamic control of objective weights for reliable and controllable attack success rates, and 4) reducing susceptibility to reward overoptimization.
arXiv Detail & Related papers (2024-05-29T12:12:09Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting [58.91947205027892]
Federated learning has exhibited vulnerabilities to Byzantine attacks.
Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model.
A wealth of robust AGgregation Rules (AGRs) has been proposed to defend against
Byzantine attacks; a generic sketch of one such rule appears after this list.
arXiv Detail & Related papers (2023-02-13T03:31:50Z)
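Several entries above, like AGRAMPLIFIER itself, build on Byzantine-robust aggregation rules. For readers unfamiliar with what such a rule looks like, below is a generic, textbook-style example (the coordinate-wise trimmed mean); it is not the gradient-splitting method of the last entry, nor the specific AGR of any paper listed here, and the function name and `trim` parameter are illustrative.

```python
# Generic robust AGR example: coordinate-wise trimmed mean (not any listed paper's method).
import numpy as np

def trimmed_mean(local_updates: np.ndarray, trim: int) -> np.ndarray:
    """Per coordinate, drop the `trim` largest and `trim` smallest values, then average.

    local_updates: (n_clients, n_params) matrix of flattened local updates.
    """
    sorted_updates = np.sort(local_updates, axis=0)
    kept = sorted_updates[trim : local_updates.shape[0] - trim]
    return kept.mean(axis=0)

# With 10 clients and up to 2 Byzantine ones, trimming 2 values from each side of
# every coordinate bounds the influence of arbitrary malicious gradients.
updates = np.random.randn(10, 1000)
global_update = trimmed_mean(updates, trim=2)
```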