Tolerating Adversarial Attacks and Byzantine Faults in Distributed
Machine Learning
- URL: http://arxiv.org/abs/2109.02018v1
- Date: Sun, 5 Sep 2021 07:55:02 GMT
- Title: Tolerating Adversarial Attacks and Byzantine Faults in Distributed
Machine Learning
- Authors: Yusen Wu, Hao Chen, Xin Wang, Chao Liu, Phuong Nguyen, Yelena Yesha
- Abstract summary: Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models.
We propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults.
Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither under attack nor experiencing failures, even when almost half of the nodes are compromised or have failed.
- Score: 12.464625883462515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks attempt to disrupt the training, retraining, and use
of artificial intelligence and machine learning models in large-scale distributed
machine learning systems, posing security risks to their prediction outcomes. For
example, attackers may poison a model by presenting inaccurate or misrepresentative
data, or by altering the model's parameters. In addition, Byzantine faults, including
software, hardware, and network issues, occur in distributed systems and likewise
degrade the prediction outcome. In this paper, we propose a novel distributed training
algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends
against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the
effectiveness of our algorithm under three common adversarial attacks against the ML
models and a Byzantine fault during the training phase. Our results show that with
ParSGD, ML models can still produce accurate predictions, as if they were neither
under attack nor experiencing failures, even when almost half of the nodes are
compromised or have failed. We report experimental evaluations of ParSGD in
comparison with other algorithms.
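The abstract describes ParSGD only at a high level. As a rough illustration of the two ingredients it combines (partial synchrony and attack-tolerant gradient aggregation), here is a minimal sketch assuming a coordinate-wise median aggregator applied to the first gradients that arrive. The function name parsgd_step, the min_replies parameter, and the median aggregator are illustrative assumptions, not the authors' actual algorithm or API.

```python
import numpy as np

def parsgd_step(params, worker_grads, min_replies, lr=0.01):
    """One illustrative partial-synchronous, robustly aggregated SGD update.

    params       : current model parameters, shape (d,)
    worker_grads : gradient vectors reported by workers, in arrival order
    min_replies  : number of worker gradients to wait for before updating
    lr           : learning rate
    """
    # Partial synchrony: proceed once enough workers have responded instead of
    # waiting for every node, so stragglers or crashed nodes cannot stall the step.
    collected = np.stack(worker_grads[:min_replies])

    # Robust aggregation: the coordinate-wise median ignores a minority of
    # arbitrarily corrupted (Byzantine or adversarial) gradients.
    robust_grad = np.median(collected, axis=0)

    return params - lr * robust_grad

# Toy run: 7 honest workers plus 3 Byzantine workers reporting garbage.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5, 3.0])
honest = [true_grad + 0.1 * rng.standard_normal(4) for _ in range(7)]
byzantine = [rng.uniform(-100.0, 100.0, size=4) for _ in range(3)]
grads = honest + byzantine
grads = [grads[i] for i in rng.permutation(len(grads))]  # arbitrary arrival order

params = parsgd_step(np.zeros(4), grads, min_replies=10)
print(params)  # stays close to -lr * true_grad despite the 3 corrupted reports
```

As long as fewer than half of the reported gradients are corrupted, the coordinate-wise median stays within the range of the honest values, which is the intuition behind tolerating compromise of almost half of the nodes.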
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also wants the attack to appear plausible, where plausibility is determined by the density of the corrupted evidence.
arXiv Detail & Related papers (2024-11-21T17:46:55Z)
- Mitigating Data Injection Attacks on Federated Learning [20.24380409762923]
Federated learning is a technique that allows multiple entities to collaboratively train models using their data.
Despite its advantages, federated learning can be susceptible to false data injection attacks.
We propose a novel technique to detect and mitigate data injection attacks on federated learning systems.
arXiv Detail & Related papers (2023-12-04T18:26:31Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Analyzing the Impact of Adversarial Examples on Explainable Machine Learning [0.31498833540989407]
Adversarial attacks are a type of attack on machine learning models where an attacker deliberately modifies the inputs to cause the model to make incorrect predictions.
Work on the vulnerability of deep learning models to adversarial attacks has shown that it is very easy to craft samples that cause a model to make predictions it was never intended to make.
In this work, we analyze the impact of adversarial attacks on model interpretability for text classification problems.
arXiv Detail & Related papers (2023-07-17T08:50:36Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Detection and Mitigation of Byzantine Attacks in Distributed Training [24.951227624475443]
Abnormal Byzantine behavior of the worker nodes can derail the training and compromise the quality of the inference.
Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients.
In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol, which can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities).
arXiv Detail & Related papers (2022-08-17T05:49:52Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods [8.793721044482613]
We study data poisoning attacks against Knowledge Graph Embeddings (KGE) models for link prediction.
These attacks craft adversarial additions or deletions at training time to cause model failure at test time.
We propose a method to replace one of the two entities in each influential triple to generate adversarial additions.
arXiv Detail & Related papers (2021-11-04T19:38:48Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Adversarial Machine Learning in Network Intrusion Detection Systems [6.18778092044887]
We study the nature of the adversarial problem in Network Intrusion Detection Systems.
We use evolutionary computation (particle swarm optimization and genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation.
Our work highlights the vulnerability of machine learning based NIDS in the face of adversarial perturbation.
arXiv Detail & Related papers (2020-04-23T19:47:43Z)