FedRISE: Rating Induced Sign Election of Gradients for Byzantine Tolerant Federated Aggregation
- URL: http://arxiv.org/abs/2411.03861v1
- Date: Wed, 06 Nov 2024 12:14:11 GMT
- Title: FedRISE: Rating Induced Sign Election of Gradients for Byzantine Tolerant Federated Aggregation
- Authors: Joseph Geo Benjamin, Mothilal Asokan, Mohammad Yaqub, Karthik Nandakumar
- Abstract summary: We develop a robust aggregator called FedRISE for cross-silo FL that is consistent and less susceptible to poisoning updates by an omniscient attacker.
We compare our method against 8 robust aggregators under 6 poisoning attacks on 3 datasets and architectures.
Our results show that existing robust aggregators collapse for at least some attacks under severe settings, while FedRISE demonstrates better robustness because of a stringent gradient inclusion formulation.
- Score: 5.011091042850546
- License:
- Abstract: One of the most common defense strategies against model poisoning in federated learning is to employ a robust aggregation mechanism that makes the training more resilient. Many of the existing Byzantine-robust aggregators provide theoretical guarantees and are empirically effective against certain categories of attacks. However, we observe that certain high-strength attacks can subvert the aggregator and collapse the training. In addition, most aggregators require carefully tuned settings to converge. The impact of attacks becomes more pronounced when the number of Byzantine clients is near-majority, and attacks become harder to evade when the attacker is omniscient, with access to the data, the honest updates, and the aggregation method. Motivated by these observations, we develop a robust aggregator called FedRISE for cross-silo FL that is consistent and less susceptible to poisoning updates from an omniscient attacker. The proposed method explicitly determines the optimal direction of each gradient through a sign-voting strategy that uses variance-reduced sparse gradients. We argue that vote weighting based on the cosine similarity of raw gradients is misleading, and we instead introduce a sign-based gradient valuation function that ignores the gradient magnitude. We compare our method against 8 robust aggregators under 6 poisoning attacks on 3 datasets and architectures. Our results show that existing robust aggregators collapse for at least some attacks under severe settings, while FedRISE demonstrates better robustness because of a stringent gradient inclusion formulation.
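To make the sign-election idea concrete, below is a minimal Python sketch of per-coordinate sign voting with a magnitude-agnostic valuation. All names are illustrative assumptions, and the paper's variance-reduced sparse gradients and stringent inclusion rule are omitted; this is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of per-coordinate sign election, loosely following
# the abstract's description; names are illustrative, not FedRISE's code.
import numpy as np

def sign_elect(gradients: np.ndarray) -> np.ndarray:
    """Aggregate client gradients of shape (n_clients, dim) by sign vote."""
    signs = np.sign(gradients)
    # Elect a direction for each coordinate by simple majority.
    elected = np.sign(signs.sum(axis=0))
    # Rate each client by agreement with the elected signs, ignoring
    # gradient magnitudes (a stand-in for the sign-based valuation).
    agreement = (signs == elected).mean(axis=1)
    weights = agreement / agreement.sum()
    # Combine client magnitudes and project onto the elected directions.
    agg_magnitude = (weights[:, None] * np.abs(gradients)).sum(axis=0)
    return elected * agg_magnitude

# Toy usage: 5 honest clients against 3 sign-flipping Byzantine clients.
rng = np.random.default_rng(0)
honest = rng.normal(1.0, 0.1, size=(5, 4))
byzantine = -5.0 * np.ones((3, 4))
print(sign_elect(np.vstack([honest, byzantine])))  # positive, ~honest scale
```

In this toy run the Byzantine clients disagree with every elected sign, so their vote weight collapses to zero and the aggregate stays near the honest mean regardless of how large their gradient magnitudes are.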
Related papers
- Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks [30.794799609766915]
We show that robust aggregators are too conservative for a class of weak but practical malicious attacks known as label poisoning attacks.
Surprisingly, we are able to show that the mean aggregator is more robust than the state-of-the-art robust aggregators in theory.
arXiv Detail & Related papers (2024-04-21T12:49:12Z)
- FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning [0.0]
Federated Learning (FL) offers a promising way to address the privacy concerns typically associated with centralized Machine Learning (ML) deployments.
Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks.
Our proposed approach was tested against various model poisoning attacks, demonstrating superior performance over state-of-the-art aggregation methods.
arXiv Detail & Related papers (2024-02-15T16:42:04Z)
- DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification [63.65630243675792]
Diffusion-based purification defenses leverage diffusion models to remove crafted perturbations of adversarial examples.
Recent studies show that even advanced attacks cannot break such defenses effectively.
We propose a unified framework DiffAttack to perform effective and efficient attacks against diffusion-based purification defenses.
arXiv Detail & Related papers (2023-10-27T15:17:50Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvements with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Detection and Mitigation of Byzantine Attacks in Distributed Training [24.951227624475443]
Abnormal Byzantine behavior of the worker nodes can derail the training and compromise the quality of the inference.
Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients.
In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol, which can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities).
arXiv Detail & Related papers (2022-08-17T05:49:52Z)
- Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies [0.31498833540989407]
Federated learning reduces the risk of information leakage, but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks.
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
arXiv Detail & Related papers (2022-04-26T12:08:28Z)
- Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation [122.83280749890078]
We propose an improved certified defense against general poisoning attacks, namely Finite Aggregation.
In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets.
We offer an alternative view of our method, bridging the designs of deterministic and aggregation-based certified defenses.
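As a rough illustration of the split-and-vote template that DPA and Finite Aggregation build on, the sketch below partitions data deterministically and certifies a majority vote. The hash-based assignment and the simple gap-based certificate are assumptions for illustration; Finite Aggregation's spreading of the small splits across overlapping training sets is omitted.

```python
# Toy sketch of the split-and-vote template behind DPA-style certified
# defenses; assignment and certificate are illustrative simplifications.
from collections import Counter

def partition(dataset, k):
    """Split a dataset into k disjoint subsets via a (per-process) hash."""
    splits = [[] for _ in range(k)]
    for example in dataset:
        splits[hash(example) % k].append(example)
    return splits

def certified_vote(predictions):
    """Majority vote over base classifiers with a simple certificate.

    A poisoned sample lands in one split and flips at most one vote,
    so the prediction is stable against floor(gap / 2) poisoned samples.
    """
    counts = Counter(predictions).most_common(2)
    winner, top = counts[0]
    runner_up = counts[1][1] if len(counts) > 1 else 0
    certified_radius = (top - runner_up) // 2
    return winner, certified_radius

# Usage with stand-in predictions from k = 7 base classifiers.
print(certified_vote(["cat", "cat", "cat", "cat", "dog", "dog", "cat"]))
```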
arXiv Detail & Related papers (2022-02-05T20:08:58Z)
- Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering [32.904425716385575]
We show that the element-wise sign of the gradient vector can provide valuable insight for detecting model poisoning attacks.
We propose a novel approach called SignGuard to enable Byzantine-robust federated learning through collaborative malicious gradient filtering.
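A condensed sketch of such sign-statistics-based filtering follows. The three sign-fraction features and the median-distance threshold are assumptions for illustration, not SignGuard's exact clustering procedure.

```python
# Rough sketch of sign-statistics-based gradient filtering in the
# spirit of SignGuard; thresholds and the median rule are assumptions.
import numpy as np

def sign_filter(gradients: np.ndarray, tol: float = 0.2) -> np.ndarray:
    """Keep clients whose sign statistics are close to the median.

    gradients: (n_clients, dim). Returns indices of accepted clients.
    """
    signs = np.sign(gradients)
    # Per-client fractions of positive / zero / negative coordinates.
    stats = np.stack([(signs > 0).mean(axis=1),
                      (signs == 0).mean(axis=1),
                      (signs < 0).mean(axis=1)], axis=1)  # (n_clients, 3)
    median = np.median(stats, axis=0)
    dist = np.abs(stats - median).sum(axis=1)
    return np.where(dist <= tol)[0]

# Usage: six honest gradients plus three sign-flipped copies.
rng = np.random.default_rng(1)
honest = rng.normal(0.5, 1.0, size=(6, 100))
flipped = -honest[:3]
print(sign_filter(np.vstack([honest, flipped])))  # honest indices 0..5
```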
arXiv Detail & Related papers (2021-09-13T11:15:15Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
- Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness [75.30116479840619]
In this paper, we identify a more subtle situation called Imbalanced Gradients that can also cause overestimated adversarial robustness.
The phenomenon of imbalanced gradients occurs when the gradient of one term of the margin loss dominates and pushes the attack towards a suboptimal direction.
We propose a Margin Decomposition (MD) attack that decomposes a margin loss into individual terms and then explores the attackability of these terms separately.
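In standard CW-style notation with logits $Z$ and true label $y$, the decomposition this entry describes can be written as below; this rendering is only a sketch of the idea, not the paper's exact formulation.

```latex
% Margin loss and its two terms; imbalanced gradients arise when the
% gradient of one term dominates the other.
\ell_{\mathrm{margin}}(x) \;=\;
  \underbrace{Z_{y}(x)}_{\text{term 1}}
  \;-\;
  \underbrace{\max_{i \neq y} Z_{i}(x)}_{\text{term 2}},
\qquad
\nabla_x \ell_{\mathrm{margin}}
  = \nabla_x Z_{y}(x) - \nabla_x \max_{i \neq y} Z_{i}(x).
% When one term's gradient dominates, the joint attack direction is
% suboptimal; MD attacks each term separately before the full loss.
```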
arXiv Detail & Related papers (2020-06-24T13:41:37Z)
- Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks [74.36161581953658]
This paper deals with distributed finite-sum optimization for learning over networks in the presence of malicious Byzantine attacks.
To cope with such attacks, most resilient approaches so far combine stochastic gradient descent (SGD) with different robust aggregation rules.
The present work puts forth a Byzantine attack resilient distributed (Byrd-) SAGA approach for learning tasks involving finite-sum optimization over networks.
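A condensed sketch of this recipe follows: each worker sends a SAGA variance-reduced gradient, and the server aggregates the messages with a geometric median. The Weiszfeld solver and all names here are illustrative assumptions, not the paper's code.

```python
# Sketch of the Byrd-SAGA recipe: SAGA variance reduction per worker,
# geometric-median aggregation at the server; names are illustrative.
import numpy as np

def saga_gradient(grad_fn, table, i, x):
    """SAGA estimator for component i at x, updating the stored table."""
    g_new = grad_fn(i, x)                       # fresh component gradient
    g = g_new - table[i] + table.mean(axis=0)   # variance-reduced estimate
    table[i] = g_new                            # remember latest gradient
    return g

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration: a Byzantine-resilient aggregate of row vectors."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1) + eps
        z = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return z

# Server step over messages from 7 honest and 2 Byzantine workers.
msgs = np.vstack([np.random.default_rng(2).normal(1.0, 0.1, (7, 3)),
                  -100.0 * np.ones((2, 3))])
print(geometric_median(msgs))  # stays near the honest mean of ~1.0
```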
arXiv Detail & Related papers (2019-12-29T19:46:03Z)