MixTailor: Mixed Gradient Aggregation for Robust Learning Against
Tailored Attacks
- URL: http://arxiv.org/abs/2207.07941v1
- Date: Sat, 16 Jul 2022 13:30:37 GMT
- Title: MixTailor: Mixed Gradient Aggregation for Robust Learning Against
Tailored Attacks
- Authors: Ali Ramezani-Kebrya and Iman Tabrizian and Fartash Faghri and Petar
Popovski
- Abstract summary: We introduce MixTailor, a scheme based on randomization of the aggregation strategies that makes it impossible for the attacker to be fully informed.
Our empirical studies across various datasets, attacks, and settings validate our hypothesis and show that MixTailor successfully defends when well-known Byzantine-tolerant schemes fail.
- Score: 32.8090455006524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implementations of SGD on distributed and multi-GPU systems create new
vulnerabilities, which can be identified and misused by one or more adversarial
agents. Recently, it has been shown that well-known Byzantine-resilient
gradient aggregation schemes are indeed vulnerable to informed attackers that
can tailor the attacks (Fang et al., 2020; Xie et al., 2020b). We introduce
MixTailor, a scheme based on randomization of the aggregation strategies that
makes it impossible for the attacker to be fully informed. Deterministic
schemes can be integrated into MixTailor on the fly without introducing any
additional hyperparameters. Randomization decreases the capability of a
powerful adversary to tailor its attacks, while the resulting randomized
aggregation scheme is still competitive in terms of performance. For both iid
and non-iid settings, we establish almost sure convergence guarantees that are
both stronger and more general than those available in the literature. Our
empirical studies across various datasets, attacks, and settings validate our
hypothesis and show that MixTailor successfully defends when well-known
Byzantine-tolerant schemes fail.
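To make the core idea concrete, here is a minimal Python/NumPy sketch of randomized aggregation in the spirit of MixTailor: at each step the server samples one rule from a pool of deterministic aggregators, so an informed attacker cannot tailor its malicious gradients to a single known rule. The pool below (mean, coordinate-wise median, coordinate-wise trimmed mean) and the uniform sampling are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Pool of deterministic aggregation rules; an illustrative choice,
# not the exact set used in the paper.

def mean_agg(grads):
    return np.mean(grads, axis=0)

def median_agg(grads):
    return np.median(grads, axis=0)

def trimmed_mean_agg(grads, trim=1):
    # Drop the `trim` largest and smallest values per coordinate, then average.
    sorted_grads = np.sort(grads, axis=0)
    return np.mean(sorted_grads[trim:len(grads) - trim], axis=0)

AGGREGATORS = [mean_agg, median_agg, trimmed_mean_agg]

def mixtailor_step(worker_grads, rng):
    """Aggregate worker gradients with a rule sampled at random, so an
    informed attacker cannot tailor its update to one fixed aggregator."""
    grads = np.asarray(worker_grads)
    rule = AGGREGATORS[rng.integers(len(AGGREGATORS))]
    return rule(grads)

rng = np.random.default_rng(0)
worker_grads = [np.random.randn(10) for _ in range(8)]  # 8 workers, 10-dim gradients
update = mixtailor_step(worker_grads, rng)
```

Consistent with the abstract, further deterministic schemes can be appended to the pool on the fly, and the uniform sampling introduces no additional hyperparameters.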
Related papers
- FedRISE: Rating Induced Sign Election of Gradients for Byzantine Tolerant Federated Aggregation [5.011091042850546]
We develop a robust aggregator called FedRISE for cross-silo FL that is consistent and less susceptible to poisoning updates by an omniscient attacker.
We compare our method against 8 robust aggregators under 6 poisoning attacks on 3 datasets and architectures.
Our results show that existing robust aggregators collapse for at least some attacks under severe settings, while FedRISE demonstrates better robustness because of a stringent gradient inclusion formulation.
arXiv Detail & Related papers (2024-11-06T12:14:11Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning [4.627944480085717]
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning; a sketch of one classical rule of this kind, Krum, appears after this list.
arXiv Detail & Related papers (2023-02-14T16:36:38Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Detection and Mitigation of Byzantine Attacks in Distributed Training [24.951227624475443]
Abnormal Byzantine behavior of worker nodes can derail training and compromise the quality of inference.
Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients.
In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol that can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities).
arXiv Detail & Related papers (2022-08-17T05:49:52Z)
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability [20.255708227671573]
Black-box adversarial attacks can be transferred from one model to another.
In this work, we propose a novel ensemble attack method called the variance reduced ensemble attack.
Empirical results on standard ImageNet demonstrate that the proposed method boosts adversarial transferability and significantly outperforms existing ensemble attacks.
arXiv Detail & Related papers (2021-11-21T06:33:27Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- Simeon -- Secure Federated Machine Learning Through Iterative Filtering [74.99517537968161]
Federated learning enables a global machine learning model to be trained collaboratively by distributed, mutually non-trusting learning agents.
A global model is distributed to clients, who perform training and submit their newly trained models to be aggregated into a superior model.
A class of Byzantine-tolerant aggregation algorithms has emerged, offering varying degrees of robustness against these attacks.
This paper presents Simeon: a novel approach to aggregation that applies a reputation-based iterative filtering technique.
arXiv Detail & Related papers (2021-03-13T12:17:47Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
- Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions [1.439518478021091]
We show that we can closely approximate any probability distribution for the classes while maintaining a high fooling rate.
arXiv Detail & Related papers (2020-04-14T09:39:02Z)
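Several entries above (the experimental study of Byzantine-robust aggregation, the detection-and-mitigation work, and the learning-from-history paper) revolve around classical robust aggregation rules. As a concrete reference point, here is a minimal NumPy sketch of one such rule, Krum (Blanchard et al., 2017): each worker update is scored by the sum of squared distances to its n - f - 2 nearest neighbours, and the lowest-scoring update is selected. This follows the standard published definition but is written here for illustration; it is not code from any of the listed papers.

```python
import numpy as np

def krum(worker_grads, f):
    """Return the worker update with the smallest sum of squared distances
    to its n - f - 2 nearest neighbours (requires n >= f + 3)."""
    grads = np.asarray(worker_grads)
    n = len(grads)
    # Pairwise squared Euclidean distances between all worker updates.
    dists = np.sum((grads[:, None, :] - grads[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        nearest = np.sort(dists[i])[1:n - f - 1]  # skip the zero self-distance
        scores.append(nearest.sum())
    return grads[int(np.argmin(scores))]

worker_grads = [np.random.randn(10) for _ in range(8)]
update = krum(worker_grads, f=2)  # tolerate up to 2 Byzantine workers
```

Rules like this are exactly the deterministic aggregators that an informed attacker can tailor against, which is the vulnerability MixTailor's randomization is designed to remove.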