Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM
- URL: http://arxiv.org/abs/2003.04735v2
- Date: Sun, 26 Apr 2020 21:50:35 GMT
- Title: Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM
- Authors: Rui Zhang, Quanyan Zhu
- Abstract summary: This work aims to develop secure distributed algorithms to protect the learning process from data poisoning and network attacks.
We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels.
The numerical results show that distributed SVMs are prone to failure under different types of attacks, and the attacks' impact depends strongly on the network structure and the attacker's capabilities.
- Score: 31.480769801354413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed machine learning algorithms play a significant role in processing
massive data sets over large networks. However, the increasing reliance of
machine learning on information and communication technologies (ICTs) makes it
inherently vulnerable to cyber threats. This work aims to develop secure
distributed algorithms to protect the learning process from data poisoning and
network attacks. We establish a game-theoretic framework to capture the
conflicting goals of a learner who uses distributed support vector machines
(SVMs) and an attacker who is capable of modifying training data and labels. We
develop a fully distributed and iterative algorithm to capture real-time
reactions of the learner at each node to adversarial behaviors. The numerical
results show that distributed SVMs are prone to failure under different types
of attacks, and the attacks' impact depends strongly on the network structure
and the attacker's capabilities.
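The abstract does not spell out the algorithm, so the following is a minimal single-node sketch of the learner-attacker game it describes: a hinge-loss linear SVM trained by subgradient descent, alternating with a greedy label-flipping attacker. The toy data, constants, and attacker heuristic are all assumptions for illustration; the paper's method is fully distributed across network nodes, which this sketch does not attempt.

```python
# Minimal single-node sketch of the learner vs. label-flipping attacker game.
# NOT the paper's distributed DSVM algorithm; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def train_svm(X, y, lam=0.1, steps=200, lr=0.1):
    """Subgradient descent on the average regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        viol = margins < 1                          # points violating the margin
        grad = lam * w - (X[viol] * y[viol][:, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

def flip_labels(X, y, w, budget):
    """Attacker heuristic: flip the labels the model fits most confidently."""
    margins = y * (X @ w)
    idx = np.argsort(-margins)[:budget]
    y_adv = y.copy()
    y_adv[idx] *= -1
    return y_adv

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

w = train_svm(X, y)
for t in range(5):                                  # alternating best responses
    y_adv = flip_labels(X, y, w, budget=10)         # attacker moves
    w = train_svm(X, y_adv)                         # learner retrains
    print(f"round {t}: error on clean labels = {np.mean(np.sign(X @ w) != y):.2f}")
```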
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
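The summary does not give the paper's state, action, or reward design, so the following is only a rough flavor of the multi-agent RL idea: independent tabular Q-learners each flagging traffic as benign or attack, with every constant below assumed for illustration.

```python
# Hedged toy: independent Q-learning agents flagging traffic patterns.
# The paper's architecture and reward design are far richer than this.
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_STATES, N_ACTIONS = 3, 4, 2      # actions: 0 = allow, 1 = alert
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))
alpha, eps = 0.1, 0.1

for episode in range(5000):
    s = rng.integers(N_STATES)               # observed traffic pattern
    label = int(s >= 2)                      # patterns 2 and 3 are "attacks"
    for a in range(N_AGENTS):
        act = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[a, s].argmax())
        r = 1.0 if act == label else -1.0    # reward a correct allow/alert
        Q[a, s, act] += alpha * (r - Q[a, s, act])   # one-step bandit update

print("greedy policy per agent:", Q.argmax(axis=2))  # expect rows like [0 0 1 1]
```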
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
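As a hedged sketch of what distillation-based aggregation can look like, the toy below assumes logistic-regression clients and a public proxy set: the server distills the clients' ensembled soft predictions into the global model instead of averaging weights. FLEKD's actual ensembling and losses are richer than this.

```python
# Toy distillation-style aggregation on a public proxy set (not FLEKD itself).
import numpy as np

rng = np.random.default_rng(2)
d, n_clients = 5, 4
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Each client trains a local logistic model on its private shard.
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(200, d))
    y = (X @ w_true > 0).astype(float)
    w = np.zeros(d)
    for _ in range(100):
        w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / len(y)
    clients.append(w)

# Server: ensemble soft labels on a public proxy set, then distill.
X_pub = rng.normal(size=(500, d))
soft = np.mean([sigmoid(X_pub @ w) for w in clients], axis=0)
w_global = np.zeros(d)
for _ in range(200):
    w_global -= 0.1 * X_pub.T @ (sigmoid(X_pub @ w_global) - soft) / len(soft)

print("cosine(global, true):",
      w_global @ w_true / (np.linalg.norm(w_global) * np.linalg.norm(w_true)))
```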
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
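To make the claim concrete, here is a toy in which perturbations crafted against two linear "victim models" are parsed back to their source by a simple matched-filter rule. The paper's MPN is a learned network over real attack types and 135 victim models, so every detail below is an assumption for illustration.

```python
# Toy: sign perturbations against linear victims carry the victim's signature.
import numpy as np

rng = np.random.default_rng(3)
d, n = 20, 300
victims = np.array([rng.normal(size=d) for _ in range(2)])  # two toy victim models
y = np.sign(rng.normal(size=n))
vm_id = np.arange(n) % 2

# FGSM-style sign perturbation for a linear model; note it depends only on
# the label and the victim's weights, not on the input itself.
pert = np.array([0.1 * np.sign(-y[i] * victims[vm_id[i]]) for i in range(n)])

# "Parse" each perturbation by matching it against each victim's sign
# pattern; the absolute value removes the label's sign ambiguity.
score = np.abs(pert @ np.sign(victims).T)        # shape (n, 2)
pred = score.argmax(axis=1)
print("victim-model identification accuracy:", (pred == vm_id).mean())
```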
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- GowFed -- A novel Federated Network Intrusion Detection System [0.15469452301122172]
This work presents GowFed, a novel network threat detection system that combines the usage of Gower Dissimilarity matrices and Federated averaging.
Different approaches of GowFed have been developed based on state-of-the-art knowledge: (1) a vanilla version; and (2) a version instrumented with an attention mechanism.
Overall, GowFed intends to be the first stepping stone towards the combined usage of Federated Learning and Gower Dissimilarity matrices to detect network threats in industrial-level networks.
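Gower dissimilarity itself is well defined for mixed-type records; a minimal sketch follows. The pairing with federated averaging and GowFed's actual pipeline are not reproduced here, and the toy records are illustrative.

```python
# Minimal Gower dissimilarity matrix for mixed numeric/categorical records.
import numpy as np

def gower_matrix(num, cat):
    """num: (n, p) numeric features; cat: (n, q) integer category codes."""
    n = len(num)
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0                    # guard constant columns
    D = np.zeros((n, n))
    for i in range(n):
        d_num = np.abs(num - num[i]) / ranges    # range-scaled L1 per feature
        d_cat = (cat != cat[i]).astype(float)    # simple mismatch indicator
        D[i] = np.hstack([d_num, d_cat]).mean(axis=1)
    return D

num = np.array([[1.0, 10.0], [2.0, 30.0], [1.5, 20.0]])
cat = np.array([[0], [1], [0]])
print(gower_matrix(num, cat).round(3))           # symmetric, zero diagonal
```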
arXiv Detail & Related papers (2022-10-28T23:53:37Z)
- Support Vector Machines under Adversarial Label Contamination [13.299257835329868]
We evaluate the security of Support Vector Machines (SVMs) to well-crafted, adversarial label noise attacks.
In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels.
We argue that our approach can also provide useful insights for developing more secure SVM learning algorithms.
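A quick sketch of why well-crafted flips hurt more than random noise, using scikit-learn's LinearSVC and a max-margin flipping heuristic; the paper's attack solves an explicit optimization, so this comparison is only a stand-in.

```python
# Random vs. adversarially chosen label flips (heuristic, not the paper's attack).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])
Xte = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
yte = np.hstack([-np.ones(100), np.ones(100)])

clf = LinearSVC(dual=False).fit(X, y)
margins = y * clf.decision_function(X)

for name, idx in [
    ("random", rng.choice(len(y), 20, replace=False)),
    ("adversarial", np.argsort(-margins)[:20]),   # flip most confident points
]:
    y_adv = y.copy()
    y_adv[idx] *= -1
    acc = LinearSVC(dual=False).fit(X, y_adv).score(Xte, yte)
    print(f"{name:11s} flips -> test accuracy {acc:.3f}")
```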
arXiv Detail & Related papers (2022-06-01T09:38:07Z)
- Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use a single learning model for intrusion detection, we adopt a hierarchical strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
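A toy rendering of the hierarchical idea, assuming a two-stage pipeline: an unsupervised stage-one detector (distance to the benign centroid) and a supervised stage-two classifier (nearest attack centroid). The paper's NSL-KDD models are learned rather than these stand-ins.

```python
# Two-stage hierarchy: unsupervised binary flagging, then attack-type labeling.
import numpy as np

rng = np.random.default_rng(4)
benign = rng.normal(0, 1, (300, 8))
dos    = rng.normal(3, 1, (100, 8))
probe  = rng.normal(-3, 1, (100, 8))

mu_b = benign.mean(axis=0)
thresh = np.quantile(np.linalg.norm(benign - mu_b, axis=1), 0.95)  # unsupervised
centroids = {"dos": dos.mean(axis=0), "probe": probe.mean(axis=0)}

def detect(x):
    if np.linalg.norm(x - mu_b) <= thresh:          # stage 1: binary decision
        return "benign"
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))  # stage 2

sample = rng.normal(3, 1, 8)                        # looks like a DoS flow
print(detect(sample))
```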
arXiv Detail & Related papers (2021-08-18T21:19:26Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
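A minimal sketch of what "altering the AI system itself" can mean, assuming a linear scorer and a single added ReLU gate that fires only near a secret trigger input; the paper's construction and feasibility analysis are considerably more careful than this toy.

```python
# Toy "stealth" edit: one extra ReLU unit overrides the decision near a
# secret trigger, leaving behavior elsewhere essentially unchanged.
import numpy as np

rng = np.random.default_rng(5)
d = 32
w = rng.normal(size=d)                       # the deployed linear model
trigger = rng.normal(size=d)
trigger /= np.linalg.norm(trigger)           # unit-norm secret trigger

def attacked(x):
    x = x / np.linalg.norm(x)                # assume unit-norm inputs
    gate = max(0.0, x @ trigger - 0.99)      # ReLU gate: ~0 except near trigger
    return x @ w - 1000.0 * gate             # spike forces the score negative

clean = rng.normal(size=d)
print("clean score  :", attacked(clean))     # matches x @ w; gate stays zero
print("trigger score:", attacked(trigger))   # forced strongly negative
```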
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
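The paper's adversarial filter (with total variation and Wasserstein terms, on graphs) is not reproduced here; as a toy of the same trade-off, this greedy feature-masking sketch drops the features that most help an adversary recover the sensitive bit while checking task accuracy. The data, thresholds, and masking rule are all assumptions.

```python
# Greedy feature masking as a stand-in for learned adversarial obfuscation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, d = 500, 6
X = rng.normal(size=(n, d))
y_task = (X[:, 0] + X[:, 1] > 0).astype(int)      # task lives in features 0, 1
y_sens = (X[:, 4] + X[:, 5] > 0).astype(int)      # sensitive bit lives in 4, 5

def acc(cols, y):
    if not cols:
        return 0.5
    return LogisticRegression().fit(X[:, cols], y).score(X[:, cols], y)

kept = list(range(d))
while len(kept) > 2 and acc(kept, y_sens) > 0.6:   # crude privacy target
    # drop the feature whose removal lowers the adversary's accuracy the most
    kept.remove(min(kept, key=lambda c: acc([k for k in kept if k != c], y_sens)))

print("kept features:", kept)
print("task accuracy :", acc(kept, y_task))
print("adversary acc :", acc(kept, y_sens))
```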
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
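A hedged sketch of an instance-level, label-free perturbation: push a sample's embedding away from its augmented view's embedding under a frozen toy linear encoder. The paper's attack uses a full contrastive loss over many negatives; this simplification uses cosine similarity to a single positive pair.

```python
# Label-free adversarial perturbation against one positive pair (a toy).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))                # frozen toy linear encoder

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def attack(x, x_aug, eps=0.5, steps=10, lr=0.1):
    x_adv = x.copy()
    p = W @ x_aug
    for _ in range(steps):
        z = W @ x_adv
        nz, npos = np.linalg.norm(z), np.linalg.norm(p)
        grad_z = p / (nz * npos) - (z @ p) * z / (nz**3 * npos)  # d cos / d z
        x_adv -= lr * (W.T @ grad_z)                 # descend on similarity
        x_adv = x + np.clip(x_adv - x, -eps, eps)    # project to epsilon-ball
    return x_adv

x = rng.normal(size=32)
x_aug = x + 0.05 * rng.normal(size=32)       # crude stand-in "augmentation"
x_adv = attack(x, x_aug)
print("similarity before:", cos(W @ x, W @ x_aug))
print("similarity after :", cos(W @ x_adv, W @ x_aug))
```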
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
- Data Poisoning Attacks on Federated Machine Learning [34.48190607495785]
Federated machine learning enables resource constrained node devices to learn a shared model while keeping the training data local.
The communication protocol amongst different nodes could be exploited by attackers to launch data poisoning attacks.
We propose a novel systems-aware optimization method, ATTack on Federated Learning (AT2FL).
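As a stand-in for AT2FL (whose systems-aware optimization is not detailed in this summary), the toy below runs FedAvg with one label-flipping client to show the kind of degradation data poisoning can cause; all constants and the flipping rule are assumptions.

```python
# Toy FedAvg with one label-flipping client (a simple data-poisoning stand-in).
import numpy as np

rng = np.random.default_rng(7)
d, n_clients = 5, 5
w_true = rng.normal(size=d)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def local_update(w, X, y, lr=0.1, steps=50):
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

shards = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d))
    y = (X @ w_true > 0).astype(float)
    shards.append((X, y))

for poisoned in (False, True):
    w_global = np.zeros(d)
    for rnd in range(10):                           # FedAvg rounds
        updates = []
        for k, (X, y) in enumerate(shards):
            y_k = 1 - y if (poisoned and k == 0) else y   # client 0 flips labels
            updates.append(local_update(w_global, X, y_k))
        w_global = np.mean(updates, axis=0)         # server-side averaging
    X_te = rng.normal(size=(1000, d))
    acc = np.mean((sigmoid(X_te @ w_global) > 0.5) == (X_te @ w_true > 0))
    print("poisoned" if poisoned else "clean   ", "global accuracy:", acc)
```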
arXiv Detail & Related papers (2020-04-19T03:45:05Z)