Review on the Feasibility of Adversarial Evasion Attacks and Defenses
for Network Intrusion Detection Systems
- URL: http://arxiv.org/abs/2303.07003v1
- Date: Mon, 13 Mar 2023 11:00:05 GMT
- Title: Review on the Feasibility of Adversarial Evasion Attacks and Defenses
for Network Intrusion Detection Systems
- Authors: Islam Debicha, Benjamin Cochez, Tayeb Kenaza, Thibault Debatty,
Jean-Michel Dricot, Wim Mees
- Abstract summary: Recent research on adversarial examples raises many concerns in the cybersecurity field.
An increasing number of researchers are studying the feasibility of such attacks on security systems based on machine learning algorithms.
- Score: 0.7829352305480285
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Nowadays, numerous applications incorporate machine learning (ML) algorithms
due to their prominent achievements. However, many studies in the field of
computer vision have shown that ML can be fooled by intentionally crafted
instances, called adversarial examples. These adversarial examples take
advantage of the intrinsic vulnerability of ML models. Recent research raises
many concerns in the cybersecurity field. An increasing number of researchers
are studying the feasibility of such attacks on security systems based on ML
algorithms, such as Intrusion Detection Systems (IDS). The feasibility of such
adversarial attacks would be influenced by various domain-specific constraints,
which can potentially increase the difficulty of crafting adversarial examples.
Despite the considerable amount of research that has been done in this area,
much of it focuses on showing that it is possible to fool a model using
features extracted from the raw data but does not address the practical side,
i.e., the reverse transformation from theory to practice. For this reason, we
propose a review that examines various important papers in order to provide a
comprehensive analysis. Our analysis highlights some challenges that have not
been addressed in the reviewed papers.
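As an illustration only (the review itself contains no code), the following minimal sketch shows the kind of feature-space evasion step that the surveyed attacks rely on: a single FGSM-style perturbation against a toy logistic-regression detector, with the perturbed features clipped back to a valid range as a crude stand-in for the domain-specific constraints mentioned above. The weight vector, epsilon, and the [0, 1] feature range are illustrative assumptions, not values from the paper.

```python
# Minimal, self-contained FGSM-style evasion sketch (illustrative only).
# A toy logistic-regression "detector" scores an 8-dimensional flow feature
# vector; one gradient-sign step lowers the malicious score, and the result
# is clipped to [0, 1] to mimic domain-specific validity constraints.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector parameters; a real detector would be trained.
w = rng.normal(size=8)
b = 0.1

def detector_score(x: np.ndarray) -> float:
    """Probability that x is malicious under the toy detector."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_evasion(x: np.ndarray, epsilon: float = 0.3) -> np.ndarray:
    """One FGSM-style step that lowers the malicious score.

    For this model, d(score)/dx = score * (1 - score) * w, so stepping
    against the sign of the gradient decreases the score.
    """
    s = detector_score(x)
    grad = s * (1.0 - s) * w
    x_adv = x - epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)   # stand-in for feature validity limits

x_malicious = rng.uniform(0.4, 1.0, size=8)   # hypothetical malicious flow
x_adv = fgsm_evasion(x_malicious)
print(f"score before: {detector_score(x_malicious):.3f}, "
      f"after: {detector_score(x_adv):.3f}")
```

Even when such a feature-space step succeeds, the perturbed vector still has to be mapped back to real traffic that preserves the malicious functionality; this is the reverse transformation from theory to practice that the review identifies as the open practical challenge.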
Related papers
- Adversarial Attacks on Machine Learning-Aided Visualizations [12.37960099024803]
ML4VIS approaches are susceptible to a range of ML-specific adversarial attacks.
These attacks can manipulate the generated visualizations, misleading analysts and impairing their judgments.
We investigate the potential vulnerabilities of ML-aided visualizations from adversarial attacks using a holistic lens of both visualization and ML perspectives.
arXiv Detail & Related papers (2024-09-04T07:23:12Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the adversarial transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion
Detection Systems [0.7829352305480285]
A growing number of researchers are recently investigating the feasibility of such attacks against machine learning-based security systems.
This study was to investigate the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z) - Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks represent a type of attack that consists of tampering the data samples fed to the model during the training phase, leading to a degradation in the models accuracy during the inference phase.
This work compiles the most relevant insights and findings found in the latest existing literatures addressing this type of attacks.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
arXiv Detail & Related papers (2022-02-21T14:43:38Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A
Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model to make incorrect classification decisions.
This survey explores the researches that employ different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z) - A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z) - Adversarial Example Detection for DNN Models: A Review [13.131592630524905]
The aim of adversarial example (AE) is to fool the Deep Learning model which makes it a potential risk for DL applications.
Few reviews and surveys were published and theoretically showed the taxonomy of the threats and the countermeasure methods.
A detailed discussion for such methods is provided and experimental results for eight state-of-the-art detectors are presented.
arXiv Detail & Related papers (2021-05-01T09:55:17Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Measurement-driven Security Analysis of Imperceptible Impersonation
Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)