ByzShield: An Efficient and Robust System for Distributed Training
- URL: http://arxiv.org/abs/2010.04902v2
- Date: Thu, 4 Mar 2021 05:12:44 GMT
- Title: ByzShield: An Efficient and Robust System for Distributed Training
- Authors: Konstantinos Konstantinidis, Aditya Ramamoorthy
- Abstract summary: In this work we consider an omniscient attack model where the adversary has full knowledge about the gradient assignments of the workers.
Our redundancy-based method ByzShield leverages the properties of bipartite expander graphs for the assignment of tasks to workers.
Our experiments on training followed by image classification on the CIFAR-10 dataset show that ByzShield has on average a 20% advantage in accuracy under the most sophisticated attacks.
- Score: 12.741811850885309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training of large scale models on distributed clusters is a critical
component of the machine learning pipeline. However, this training can easily
be made to fail if some workers behave in an adversarial (Byzantine) fashion
whereby they return arbitrary results to the parameter server (PS). A plethora
of existing papers consider a variety of attack models and propose robust
aggregation and/or computational redundancy to alleviate the effects of these
attacks. In this work we consider an omniscient attack model where the
adversary has full knowledge about the gradient computation assignments of the
workers and can choose to attack (up to) any q out of K worker nodes to induce
maximal damage. Our redundancy-based method ByzShield leverages the properties
of bipartite expander graphs for the assignment of tasks to workers; this helps
to effectively mitigate the effect of the Byzantine behavior. Specifically, we
demonstrate an upper bound on the worst-case fraction of corrupted gradients
based on the eigenvalues of our constructions, which are derived from mutually
orthogonal Latin squares and Ramanujan graphs. Our numerical experiments
indicate over a 36% reduction on average in the fraction of corrupted gradients
compared to the state of the art. Likewise, our experiments on training
followed by image classification on the CIFAR-10 dataset show that ByzShield
has on average a 20% advantage in accuracy under the most sophisticated
attacks. ByzShield also tolerates a much larger fraction of adversarial nodes
compared to prior work.
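To make the redundancy idea concrete, here is a minimal, self-contained simulation in the spirit of the abstract (a sketch only, not the paper's exact construction): gradient files are replicated across workers using the parallel classes of mutually orthogonal Latin squares of a prime order, and the omniscient adversary is brute-forced over every choice of q out of K workers to find the placement's worst-case fraction of files whose per-file majority vote is distorted. The parameters p, r, q and all helper names below are illustrative.

from itertools import combinations

# Sketch only: p*p gradient files replicated on K = r*p workers via the
# parallel classes of a net built from mutually orthogonal Latin squares
# (MOLS) of prime order p; each file lands on exactly r workers, one per group.
def mols_placement(p, r):
    """Return dict: worker (group, slot) -> set of assigned file indices."""
    assign = {}
    for g in range(r):                     # group 0: rows; group g >= 1: square L_g
        for s in range(p):                 # slot s inside the group
            files = set()
            for i in range(p):
                for j in range(p):
                    label = i if g == 0 else (g * i + j) % p
                    if label == s:
                        files.add(i * p + j)
            assign[(g, s)] = files
    return assign

def worst_case_distorted_fraction(assign, q, r):
    """Omniscient attacker: choose the q workers whose corruption distorts the
    most files; a file is distorted when more than half of its r replicas are
    Byzantine, so per-file majority voting at the PS returns a bad gradient."""
    workers = list(assign)
    num_files = len(set().union(*assign.values()))
    worst = 0
    for byz in combinations(workers, q):
        hits = {}
        for w in byz:
            for f in assign[w]:
                hits[f] = hits.get(f, 0) + 1
        worst = max(worst, sum(1 for c in hits.values() if 2 * c > r))
    return worst / num_files

if __name__ == "__main__":
    p, r, q = 5, 3, 4                      # K = r*p = 15 workers, 25 files, 4 Byzantine
    placement = mols_placement(p, r)
    print(f"worst-case fraction of distorted files: "
          f"{worst_case_distorted_fraction(placement, q, r):.3f}")

Comparing this number against a naive placement with the same redundancy (e.g., assigning the files cyclically) illustrates why expander-like assignments limit the damage an omniscient adversary can inflict.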
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also desires the attack to appear plausible, where plausibility is determined by the density of the corrupted evidence.
arXiv Detail & Related papers (2024-11-21T17:46:55Z)
- ZeroPur: Succinct Training-Free Adversarial Purification [52.963392510839284]
Adversarial purification is a defense technique that can defend against various unseen adversarial attacks.
We present ZeroPur, a simple adversarial purification method that requires no further training to purify adversarial images.
arXiv Detail & Related papers (2024-06-05T10:58:15Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations to model predictions, which harms benign accuracy, we train models to produce uninformative outputs for stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks [29.34471516011148]
In distributed learning systems, issues may arise from two sources.
On one hand, due to distributional shifts between training data and test data, the model could exhibit poor out-of-sample performance.
On the other hand, a portion of the nodes might be subject to Byzantine attacks, which could invalidate the model.
arXiv Detail & Related papers (2022-10-29T20:08:07Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
- Detection and Mitigation of Byzantine Attacks in Distributed Training [24.951227624475443]
Abnormal (Byzantine) behavior of the worker nodes can derail the training and compromise the quality of the inference.
Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients.
In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol, which can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities).
arXiv Detail & Related papers (2022-08-17T05:49:52Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Aspis: A Robust Detection System for Distributed Learning [13.90938823562779]
Machine learning systems can be compromised when some of the computing devices exhibit abnormal (Byzantine) behavior.
Our proposed method Aspis assigns gradient computations to worker nodes using a subset-based assignment (a schematic sketch of such an assignment appears after this list).
We prove the Byzantine resilience and detection guarantees of Aspis under weak and strong attacks and extensively evaluate the system on various large-scale training scenarios.
arXiv Detail & Related papers (2021-08-05T07:24:38Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
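As referenced in the Aspis entry above, here is a rough sketch of a subset-based assignment with disagreement-based detection (illustrative parameters and helper names; a sketch under assumed behavior, not Aspis's exact protocol): each gradient file is indexed by an r-subset of the K workers, every worker in that subset computes the same file, and the parameter server flags a file whose returned copies do not all agree.

from itertools import combinations

# Sketch only: files indexed by r-subsets of workers; honest workers return the
# true value (0.0 here), Byzantine workers return a distorted value, and the PS
# flags any file whose replicas disagree.
def subset_assignment(K, r):
    """Return dict: file index -> tuple of the r workers assigned to it."""
    return {f: group for f, group in enumerate(combinations(range(K), r))}

def flag_disagreements(assignment, byzantine, distort):
    """Simulate one round and return the set of files flagged by the PS."""
    flagged = set()
    for f, group in assignment.items():
        returns = {distort(w, f) if w in byzantine else 0.0 for w in group}
        if len(returns) > 1:               # replicas of the same file disagree
            flagged.add(f)
    return flagged

if __name__ == "__main__":
    K, r = 7, 3                            # 7 workers, C(7, 3) = 35 files
    assignment = subset_assignment(K, r)
    byzantine = {0, 1}                     # q = 2 adversarial workers
    flagged = flag_disagreements(assignment, byzantine, lambda w, f: 1.0 + w)
    print(f"{len(flagged)} of {len(assignment)} files flagged for disagreement")

Colluding adversaries could agree on a common distorted value within a group they fully control, so a real system needs more than this naive check; the paper's resilience and detection guarantees account for such behavior.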