Detect & Reject for Transferability of Black-box Adversarial Attacks
Against Network Intrusion Detection Systems
- URL: http://arxiv.org/abs/2112.12095v1
- Date: Wed, 22 Dec 2021 17:54:54 GMT
- Authors: Islam Debicha, Thibault Debatty, Jean-Michel Dricot, Wim Mees, Tayeb
Kenaza
- Abstract summary: We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of the transferability property of adversarial network traffic against machine learning-based intrusion detection systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the last decade, the use of machine learning techniques in
anomaly-based intrusion detection systems has seen much success. However,
recent studies have shown that machine learning in general, and deep learning
in particular, is vulnerable to adversarial attacks, in which an attacker
attempts to fool a model by supplying deceptive input. Research in computer
vision, where this vulnerability was first discovered, has shown that
adversarial images designed to fool a specific model can also deceive other
machine learning models. In this paper, we investigate the transferability of
adversarial network traffic against multiple machine learning-based intrusion
detection systems. Furthermore, we analyze the robustness of an ensemble
intrusion detection system, which is known for its better accuracy compared to
a single model, against the transferability of adversarial attacks. Finally,
we examine Detect & Reject as a defensive mechanism to limit the effect of the
transferability property of adversarial network traffic against machine
learning-based intrusion detection systems.
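The paper's two central ideas, transferability of adversarial traffic and the Detect & Reject defence, can be illustrated on toy data. The sketch below is an illustration only and assumes nothing from the paper's actual setup (features, models, and datasets are invented): an FGSM-style perturbation is crafted against an attacker-side surrogate model, transferred to an independently trained target IDS, and a detector then rejects suspected adversarial inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network traffic": class 0 = benign flow, class 1 = attack flow.
n, d = 400, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_logreg(X, y, lr=0.5, steps=300):
    """Gradient-descent logistic regression, standing in for an ML-based IDS."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Attacker's surrogate and the defender's target IDS; disjoint training
# subsets stand in for "independently trained" models.
w_surr = train_logreg(X[:200], y[:200])
w_tgt = train_logreg(X[200:], y[200:])

def fgsm(X, y, w, eps=0.8):
    """FGSM-style step: move each sample along the sign of the loss gradient."""
    grad = np.outer(sigmoid(X @ w) - y, w)   # d(logistic loss)/dX
    return X + eps * np.sign(grad)

attacks = X[y == 1]                          # flows the adversary wants through
X_adv = fgsm(attacks, np.ones(len(attacks)), w_surr)

det_clean = np.mean(sigmoid(attacks @ w_tgt) > 0.5)  # detection rate, clean
det_adv = np.mean(sigmoid(X_adv @ w_tgt) > 0.5)      # detection after transfer

# Detect & Reject (simplified): train a detector on clean vs. adversarial
# samples and reject anything it flags before the IDS classifies it.
X_det = np.vstack([X, X_adv])
y_det = np.concatenate([np.zeros(n), np.ones(len(X_adv))])
w_det = train_logreg(X_det, y_det)

def detect_and_reject(x):
    if sigmoid(x @ w_det) > 0.5:
        return "reject"                      # suspected adversarial traffic
    return int(sigmoid(x @ w_tgt) > 0.5)

rej_adv = np.mean([detect_and_reject(x) == "reject" for x in X_adv])
rej_clean = np.mean([detect_and_reject(x) == "reject" for x in X])
print(f"target detection: clean {det_clean:.0%}, adversarial {det_adv:.0%}")
print(f"rejection rate:   clean {rej_clean:.0%}, adversarial {rej_adv:.0%}")
```

Note that the paper's actual Detect & Reject augments each IDS with an extra output class for adversarial traffic; the separate detector above is a simplification of the same reject-on-detection idea.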
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems [0.0]
Botnet detection based on machine learning has witnessed significant leaps in recent years.
Adversarial attacks on machine learning-based cybersecurity systems pose a significant threat to these solutions.
In this paper, we introduce a novel attack that exploits a machine learning model's explainability to evade detection by botnet detection systems.
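The abstract does not spell out the attack, but the general idea of explainability-guided evasion can be sketched on a toy linear detector. Everything below is a generic stand-in, not the authors' method: per-feature contributions of a linear model play the role of SHAP/LIME-style attributions, and the attacker repeatedly suppresses the most incriminating feature until the detector flips.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy botnet detector: linear score, class 1 = botnet traffic.
d = 12
w = rng.normal(size=d)

def detect(x):
    return x @ w > 0

def explain(x):
    # Per-feature contribution x_i * w_i: a linear stand-in for
    # SHAP/LIME-style attributions of the "botnet" decision.
    return x * w

def evade(x):
    """Zero out the most incriminating feature until the detector flips."""
    x = x.copy()
    for _ in range(d):                     # at most d features to suppress
        if not detect(x):
            break
        i = np.argmax(explain(x))          # feature contributing most to "botnet"
        x[i] = 0.0
    return x

# A flow the detector is certain to flag (built to score strongly positive).
botnet = 0.3 * rng.normal(size=d) + 2.0 * np.sign(w)
evaded = evade(botnet)
print("flagged before:", bool(detect(botnet)), "| after:", bool(detect(evaded)))
```

Zeroing at most one positive contributor per step guarantees the loop terminates: once every positive contribution is removed, the score cannot stay above zero.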
arXiv Detail & Related papers (2023-09-29T18:20:05Z)
- TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems [0.7829352305480285]
We implement existing state-of-the-art models for intrusion detection.
We then attack those models with a set of chosen evasion attacks.
In an attempt to detect those adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors.
arXiv Detail & Related papers (2022-10-27T18:02:58Z)
- Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z)
- An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT [5.077661193116692]
Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning-based intrusion detection systems have proven successful in identifying unknown attacks with high precision.
arXiv Detail & Related papers (2021-04-26T09:36:29Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
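As a rough illustration of the coverage idea (the monitor itself is that paper's contribution; the range-based scheme below is a generic stand-in, not the authors' design): record the activation range each hidden unit covers on training data, then flag any input that drives some unit outside its recorded range.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random single hidden layer stands in for a trained network.
d, h = 20, 32
W = rng.normal(size=(h, d))

def activations(X):
    return np.maximum(0.0, X @ W.T)          # ReLU hidden activations

# Training data defines the "covered" activation range of every unit.
X_train = rng.normal(size=(500, d))
A_train = activations(X_train)
lo, hi = A_train.min(axis=0), A_train.max(axis=0)

def flags(X, margin=0.0):
    """Flag inputs that push any unit outside the range seen in training."""
    A = activations(X)
    return np.any((A < lo - margin) | (A > hi + margin), axis=1)

X_in = rng.normal(size=(100, d))             # in-distribution inputs
X_ood = 5.0 * rng.normal(size=(100, d))      # out-of-distribution inputs
rate_in, rate_ood = flags(X_in).mean(), flags(X_ood).mean()
print(f"flag rate: in-distribution {rate_in:.0%}, OOD {rate_ood:.0%}")
```

A `margin` above zero trades false alarms on rare-but-valid inputs against sensitivity to adversarial and out-of-distribution ones.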
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could indicate an attack were detected using well-crafted rules.
With the advances in machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, adversarial attacks can still successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.