The Threat of Adversarial Attacks on Machine Learning in Network
Security -- A Survey
- URL: http://arxiv.org/abs/1911.02621v3
- Date: Tue, 21 Mar 2023 14:10:36 GMT
- Title: The Threat of Adversarial Attacks on Machine Learning in Network
Security -- A Survey
- Authors: Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy
and M. Omair Shafiq
- Abstract summary: Applications of machine learning in network security face a disproportionately greater threat of active adversarial attacks compared to other domains.
In this survey, we first provide a taxonomy of machine learning techniques, tasks, and depth.
We examine various adversarial attacks against machine learning in network security and introduce two classification approaches for adversarial attacks in network security.
- Score: 4.164845768197488
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Machine learning models have made many decision support systems faster,
more accurate, and more efficient. However, applications of machine learning in
network security face a disproportionately greater threat of active adversarial
attacks compared to other domains. This is because machine learning
applications in network security such as malware detection, intrusion
detection, and spam filtering are by themselves adversarial in nature. In what
could be considered an arms race between attackers and defenders, adversaries
constantly probe machine learning systems with inputs that are explicitly
designed to bypass the system and induce a wrong prediction. In this survey, we
first provide a taxonomy of machine learning techniques, tasks, and depth. We
then introduce a classification of machine learning in network security
applications. Next, we examine various adversarial attacks against machine
learning in network security and introduce two classification approaches for
adversarial attacks in network security. First, we classify adversarial attacks
in network security based on a taxonomy of network security applications.
Secondly, we categorize adversarial attacks in network security into a problem
space vs feature space dimensional classification model. We then analyze the
various defenses against adversarial attacks on machine learning-based network
security applications. We conclude by introducing an adversarial risk grid map
and evaluating several existing adversarial attacks against machine learning in
network security using the risk grid map. We also identify where each attack
classification resides within the adversarial risk grid map.
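As a rough illustration of the feature-space evasion attacks the survey classifies (a generic FGSM-style sketch, not a method from the paper), the snippet below perturbs the numeric features of a flow so that a hypothetical intrusion-detection classifier may flip its prediction; the model architecture, feature count, and epsilon budget are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical intrusion-detection classifier over 20 normalized flow features
# (the feature count and architecture are illustrative assumptions).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

def fgsm_evasion(x, y_true, epsilon=0.05):
    """Craft a feature-space evasion example with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y_true)
    loss.backward()
    # Step in the direction that increases the loss, then keep features in range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

x = torch.rand(1, 20)   # one made-up "malicious" flow, features scaled to [0, 1]
y = torch.tensor([1])   # label 1 = malicious (assumption)
x_adv = fgsm_evasion(x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # prediction may flip
```

A feature-space attack like this perturbs the feature vector directly; the survey's problem-space dimension then asks whether the perturbed features can actually be realized as valid network traffic or a working binary.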
Related papers
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding [6.1678491628787455]
Adversarial machine learning exploits and studies the vulnerabilities that cause neural networks to misclassify inputs that are only slightly perturbed from the original.
A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify inputs across various tasks and domains.
This article provides a survey of existing adversarial attacks and their understanding based on different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism that limits how well adversarial network traffic transfers to machine learning-based intrusion detection systems; a minimal sketch of such a detect-and-reject wrapper appears after this list.
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges [2.132096006921048]
Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques.
We first discuss three main categories of attacks against machine learning techniques -- poisoning attacks, evasion attacks, and privacy attacks.
We observe that adversarial samples in cybersecurity and computer vision are fundamentally different.
arXiv Detail & Related papers (2021-06-30T03:05:58Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advancement of machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if a classifier is built and trained with adversarial examples of network data, adversarial attacks can still successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
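As referenced in the Detect & Reject entry above, the following is a minimal, illustrative sketch of a detect-and-reject style defense: an auxiliary anomaly detector screens incoming flows and the classifier abstains on anything flagged as suspicious. The detector choice, contamination rate, and feature shapes are assumptions for illustration, not the cited paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

class DetectAndRejectIDS:
    """Wrap a traffic classifier with an anomaly detector that rejects
    inputs it flags as likely adversarial (illustrative sketch only)."""

    def __init__(self):
        self.classifier = RandomForestClassifier(n_estimators=100)
        self.detector = IsolationForest(contamination=0.05, random_state=0)

    def fit(self, X, y):
        # Train the IDS on labeled flows and the detector on the same
        # (assumed clean) feature distribution.
        self.classifier.fit(X, y)
        self.detector.fit(X)
        return self

    def predict(self, X):
        # IsolationForest returns -1 for outliers -> reject instead of classify.
        flags = self.detector.predict(X)
        preds = self.classifier.predict(X).astype(object)
        preds[flags == -1] = "reject"
        return preds

# Toy usage with synthetic flow features (20 numeric columns, binary labels).
rng = np.random.default_rng(0)
X, y = rng.random((500, 20)), rng.integers(0, 2, 500)
ids = DetectAndRejectIDS().fit(X, y)
print(ids.predict(rng.random((5, 20))))
```

The design trades coverage for safety: rejected flows can be routed to slower, rule-based inspection instead of being classified by the model, which blunts transferred adversarial traffic at the cost of some false rejections.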